2025-09-27 21:16:17.861851 | Job console starting
2025-09-27 21:16:17.872237 | Updating git repos
2025-09-27 21:16:17.966256 | Cloning repos into workspace
2025-09-27 21:16:18.193985 | Restoring repo states
2025-09-27 21:16:18.224161 | Merging changes
2025-09-27 21:16:18.224181 | Checking out repos
2025-09-27 21:16:18.489053 | Preparing playbooks
2025-09-27 21:16:19.166623 | Running Ansible setup
2025-09-27 21:16:23.243954 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-27 21:16:23.998661 |
2025-09-27 21:16:23.998819 | PLAY [Base pre]
2025-09-27 21:16:24.023018 |
2025-09-27 21:16:24.023379 | TASK [Setup log path fact]
2025-09-27 21:16:24.069128 | orchestrator | ok
2025-09-27 21:16:24.087786 |
2025-09-27 21:16:24.087943 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-27 21:16:24.117576 | orchestrator | ok
2025-09-27 21:16:24.130140 |
2025-09-27 21:16:24.130268 | TASK [emit-job-header : Print job information]
2025-09-27 21:16:24.177637 | # Job Information
2025-09-27 21:16:24.177913 | Ansible Version: 2.16.14
2025-09-27 21:16:24.177973 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-27 21:16:24.178031 | Pipeline: post
2025-09-27 21:16:24.178073 | Executor: 521e9411259a
2025-09-27 21:16:24.178111 | Triggered by: https://github.com/osism/testbed/commit/631cb79f4d70d8c4487243e46d78b1592deefa08
2025-09-27 21:16:24.178149 | Event ID: 30038496-9be7-11f0-80e8-427b92c6bb71
2025-09-27 21:16:24.187734 |
2025-09-27 21:16:24.187859 | LOOP [emit-job-header : Print node information]
2025-09-27 21:16:24.306301 | orchestrator | ok:
2025-09-27 21:16:24.306606 | orchestrator | # Node Information
2025-09-27 21:16:24.306649 | orchestrator | Inventory Hostname: orchestrator
2025-09-27 21:16:24.306675 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-27 21:16:24.306697 | orchestrator | Username: zuul-testbed01
2025-09-27 21:16:24.306718 | orchestrator | Distro: Debian 12.12
2025-09-27 21:16:24.306741 | orchestrator | Provider: static-testbed
2025-09-27 21:16:24.306762 | orchestrator | Region:
2025-09-27 21:16:24.306784 | orchestrator | Label: testbed-orchestrator
2025-09-27 21:16:24.306804 | orchestrator | Product Name: OpenStack Nova
2025-09-27 21:16:24.306824 | orchestrator | Interface IP: 81.163.193.140
2025-09-27 21:16:24.334920 |
2025-09-27 21:16:24.335060 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-27 21:16:24.799254 | orchestrator -> localhost | changed
2025-09-27 21:16:24.807714 |
2025-09-27 21:16:24.807846 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-27 21:16:25.825952 | orchestrator -> localhost | changed
2025-09-27 21:16:25.842163 |
2025-09-27 21:16:25.842365 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-27 21:16:26.119573 | orchestrator -> localhost | ok
2025-09-27 21:16:26.128626 |
2025-09-27 21:16:26.128763 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-27 21:16:26.150153 | orchestrator | ok
2025-09-27 21:16:26.166969 | orchestrator | included: /var/lib/zuul/builds/58989b4fd94645e9af60764394f17cd1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-27 21:16:26.175070 |
2025-09-27 21:16:26.175174 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-27 21:16:27.140229 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-27 21:16:27.140457 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/58989b4fd94645e9af60764394f17cd1/work/58989b4fd94645e9af60764394f17cd1_id_rsa
2025-09-27 21:16:27.140495 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/58989b4fd94645e9af60764394f17cd1/work/58989b4fd94645e9af60764394f17cd1_id_rsa.pub
2025-09-27 21:16:27.140539 | orchestrator -> localhost | The key fingerprint is:
2025-09-27 21:16:27.140567 | orchestrator -> localhost | SHA256:fVKeuEWX5j3GJ4YmrCSB+lkP2MogJRpYOZ/lknFdsks zuul-build-sshkey
2025-09-27 21:16:27.140590 | orchestrator -> localhost | The key's randomart image is:
2025-09-27 21:16:27.140626 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-27 21:16:27.140649 | orchestrator -> localhost | | .. .... |
2025-09-27 21:16:27.140670 | orchestrator -> localhost | |..o ..o .o . |
2025-09-27 21:16:27.140690 | orchestrator -> localhost | |+ .o.B. E o + |
2025-09-27 21:16:27.140710 | orchestrator -> localhost | |.+ .=o.o + = *.. |
2025-09-27 21:16:27.140730 | orchestrator -> localhost | |o o ..= S * B ++o|
2025-09-27 21:16:27.140758 | orchestrator -> localhost | | . + + = . B ...o|
2025-09-27 21:16:27.140778 | orchestrator -> localhost | | = o . |
2025-09-27 21:16:27.140798 | orchestrator -> localhost | | |
2025-09-27 21:16:27.140818 | orchestrator -> localhost | | |
2025-09-27 21:16:27.140838 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-27 21:16:27.140892 | orchestrator -> localhost | ok: Runtime: 0:00:00.493284
2025-09-27 21:16:27.148761 |
2025-09-27 21:16:27.148875 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-27 21:16:27.187887 | orchestrator | ok
2025-09-27 21:16:27.197936 | orchestrator | included: /var/lib/zuul/builds/58989b4fd94645e9af60764394f17cd1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-27 21:16:27.207171 |
2025-09-27 21:16:27.207268 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-27 21:16:27.221243 | orchestrator | skipping: Conditional result was False
2025-09-27 21:16:27.230332 |
2025-09-27 21:16:27.230451 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-27 21:16:27.810087 | orchestrator | changed
2025-09-27 21:16:27.819492 |
2025-09-27 21:16:27.819641 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-27 21:16:28.119684 | orchestrator | ok
2025-09-27 21:16:28.129453 |
2025-09-27 21:16:28.129591 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-27 21:16:28.545752 | orchestrator | ok
2025-09-27 21:16:28.551968 |
2025-09-27 21:16:28.552072 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-27 21:16:28.974047 | orchestrator | ok
2025-09-27 21:16:28.983021 |
2025-09-27 21:16:28.983140 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-27 21:16:29.006881 | orchestrator | skipping: Conditional result was False
2025-09-27 21:16:29.014807 |
2025-09-27 21:16:29.014938 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-27 21:16:29.466167 | orchestrator -> localhost | changed
2025-09-27 21:16:29.491087 |
2025-09-27 21:16:29.491220 | TASK [add-build-sshkey : Add back temp key]
2025-09-27 21:16:29.829163 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/58989b4fd94645e9af60764394f17cd1/work/58989b4fd94645e9af60764394f17cd1_id_rsa (zuul-build-sshkey)
2025-09-27 21:16:29.829410 | orchestrator -> localhost | ok: Runtime: 0:00:00.018423
2025-09-27 21:16:29.836678 |
2025-09-27 21:16:29.836786 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-27 21:16:30.233254 | orchestrator | ok
2025-09-27 21:16:30.239908 |
2025-09-27 21:16:30.240012 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-27 21:16:30.273998 | orchestrator | skipping: Conditional result was False
2025-09-27 21:16:30.331477 |
2025-09-27 21:16:30.331643 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-27 21:16:30.764826 | orchestrator | ok
2025-09-27 21:16:30.780911 |
2025-09-27 21:16:30.781029 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-27 21:16:30.820565 | orchestrator | ok
2025-09-27 21:16:30.827709 |
2025-09-27 21:16:30.827810 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-27 21:16:31.124454 | orchestrator -> localhost | ok
2025-09-27 21:16:31.132062 |
2025-09-27 21:16:31.132169 | TASK [validate-host : Collect information about the host]
2025-09-27 21:16:32.351069 | orchestrator | ok
2025-09-27 21:16:32.366292 |
2025-09-27 21:16:32.366406 | TASK [validate-host : Sanitize hostname]
2025-09-27 21:16:32.429698 | orchestrator | ok
2025-09-27 21:16:32.437959 |
2025-09-27 21:16:32.438095 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-27 21:16:33.010158 | orchestrator -> localhost | changed
2025-09-27 21:16:33.022112 |
2025-09-27 21:16:33.022284 | TASK [validate-host : Collect information about zuul worker]
2025-09-27 21:16:33.437113 | orchestrator | ok
2025-09-27 21:16:33.442905 |
2025-09-27 21:16:33.443024 | TASK [validate-host : Write out all zuul information for each host]
2025-09-27 21:16:34.038633 | orchestrator -> localhost | changed
2025-09-27 21:16:34.049340 |
2025-09-27 21:16:34.049444 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-27 21:16:34.334394 | orchestrator | ok
2025-09-27 21:16:34.342686 |
2025-09-27 21:16:34.342805 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-27 21:17:07.318023 | orchestrator | changed:
2025-09-27 21:17:07.318240 | orchestrator | .d..t...... src/
2025-09-27 21:17:07.318277 | orchestrator | .d..t...... src/github.com/
2025-09-27 21:17:07.318302 | orchestrator | .d..t...... src/github.com/osism/
2025-09-27 21:17:07.318324 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-27 21:17:07.318344 | orchestrator | RedHat.yml
2025-09-27 21:17:07.331329 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-27 21:17:07.331347 | orchestrator | RedHat.yml
2025-09-27 21:17:07.331399 | orchestrator | = 2.2.0"...
2025-09-27 21:17:19.857387 | orchestrator | 21:17:19.857 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-27 21:17:19.881069 | orchestrator | 21:17:19.880 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-27 21:17:20.354000 | orchestrator | 21:17:20.353 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-27 21:17:21.003301 | orchestrator | 21:17:21.003 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-27 21:17:21.080326 | orchestrator | 21:17:21.080 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-27 21:17:21.550492 | orchestrator | 21:17:21.550 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-27 21:17:21.625248 | orchestrator | 21:17:21.625 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-27 21:17:22.306115 | orchestrator | 21:17:22.305 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-27 21:17:22.306181 | orchestrator | 21:17:22.305 STDOUT terraform: Providers are signed by their developers.
2025-09-27 21:17:22.306190 | orchestrator | 21:17:22.305 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-27 21:17:22.306196 | orchestrator | 21:17:22.305 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-27 21:17:22.306201 | orchestrator | 21:17:22.305 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-27 21:17:22.306212 | orchestrator | 21:17:22.305 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-27 21:17:22.306216 | orchestrator | 21:17:22.305 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-27 21:17:22.306533 | orchestrator | 21:17:22.306 STDOUT terraform: you run "tofu init" in the future.
2025-09-27 21:17:22.306540 | orchestrator | 21:17:22.306 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-27 21:17:22.306544 | orchestrator | 21:17:22.306 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-27 21:17:22.306548 | orchestrator | 21:17:22.306 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-27 21:17:22.306552 | orchestrator | 21:17:22.306 STDOUT terraform: should now work.
2025-09-27 21:17:22.306556 | orchestrator | 21:17:22.306 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-27 21:17:22.306560 | orchestrator | 21:17:22.306 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-27 21:17:22.306564 | orchestrator | 21:17:22.306 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-27 21:17:22.633651 | orchestrator | 21:17:22.633 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-27 21:17:22.633737 | orchestrator | 21:17:22.633 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-27 21:17:22.633760 | orchestrator | 21:17:22.633 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-27 21:17:22.633770 | orchestrator | 21:17:22.633 STDOUT terraform: for this configuration.
2025-09-27 21:17:22.853400 | orchestrator | 21:17:22.852 STDOUT terraform: ci.auto.tfvars
2025-09-27 21:17:22.868206 | orchestrator | 21:17:22.868 STDOUT terraform: default_custom.tf
2025-09-27 21:17:24.186631 | orchestrator | 21:17:24.185 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-27 21:17:24.747361 | orchestrator | 21:17:24.747 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-27 21:17:24.950055 | orchestrator | 21:17:24.949 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-27 21:17:24.950132 | orchestrator | 21:17:24.949 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-27 21:17:24.950138 | orchestrator | 21:17:24.950 STDOUT terraform:  + create
2025-09-27 21:17:24.950144 | orchestrator | 21:17:24.950 STDOUT terraform:  <= read (data resources)
2025-09-27 21:17:24.950149 | orchestrator | 21:17:24.950 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-27 21:17:24.950155 | orchestrator | 21:17:24.950 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-09-27 21:17:24.950166 | orchestrator | 21:17:24.950 STDOUT terraform:  # (config refers to values not yet known)
2025-09-27 21:17:24.950190 | orchestrator | 21:17:24.950 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-27 21:17:24.950218 | orchestrator | 21:17:24.950 STDOUT terraform:  + checksum = (known after apply)
2025-09-27 21:17:24.950254 | orchestrator | 21:17:24.950 STDOUT terraform:  + created_at = (known after apply)
2025-09-27 21:17:24.950277 | orchestrator | 21:17:24.950 STDOUT terraform:  + file = (known after apply)
2025-09-27 21:17:24.950306 | orchestrator | 21:17:24.950 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.950340 | orchestrator | 21:17:24.950 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.950359 | orchestrator | 21:17:24.950 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-27 21:17:24.950388 | orchestrator | 21:17:24.950 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-27 21:17:24.950407 | orchestrator | 21:17:24.950 STDOUT terraform:  + most_recent = true
2025-09-27 21:17:24.950436 | orchestrator | 21:17:24.950 STDOUT terraform:  + name = (known after apply)
2025-09-27 21:17:24.950466 | orchestrator | 21:17:24.950 STDOUT terraform:  + protected = (known after apply)
2025-09-27 21:17:24.950504 | orchestrator | 21:17:24.950 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.950524 | orchestrator | 21:17:24.950 STDOUT terraform:  + schema = (known after apply)
2025-09-27 21:17:24.950552 | orchestrator | 21:17:24.950 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-27 21:17:24.950589 | orchestrator | 21:17:24.950 STDOUT terraform:  + tags = (known after apply)
2025-09-27 21:17:24.950613 | orchestrator | 21:17:24.950 STDOUT terraform:  + updated_at = (known after apply)
2025-09-27 21:17:24.950636 | orchestrator | 21:17:24.950 STDOUT terraform:  }
2025-09-27 21:17:24.950693 | orchestrator | 21:17:24.950 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-09-27 21:17:24.950728 | orchestrator | 21:17:24.950 STDOUT terraform:  # (config refers to values not yet known)
2025-09-27 21:17:24.950761 | orchestrator | 21:17:24.950 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-27 21:17:24.950790 | orchestrator | 21:17:24.950 STDOUT terraform:  + checksum = (known after apply)
2025-09-27 21:17:24.950815 | orchestrator | 21:17:24.950 STDOUT terraform:  + created_at = (known after apply)
2025-09-27 21:17:24.950842 | orchestrator | 21:17:24.950 STDOUT terraform:  + file = (known after apply)
2025-09-27 21:17:24.950872 | orchestrator | 21:17:24.950 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.950897 | orchestrator | 21:17:24.950 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.950925 | orchestrator | 21:17:24.950 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-27 21:17:24.950955 | orchestrator | 21:17:24.950 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-27 21:17:24.950974 | orchestrator | 21:17:24.950 STDOUT terraform:  + most_recent = true
2025-09-27 21:17:24.951001 | orchestrator | 21:17:24.950 STDOUT terraform:  + name = (known after apply)
2025-09-27 21:17:24.951033 | orchestrator | 21:17:24.950 STDOUT terraform:  + protected = (known after apply)
2025-09-27 21:17:24.951061 | orchestrator | 21:17:24.951 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.951096 | orchestrator | 21:17:24.951 STDOUT terraform:  + schema = (known after apply)
2025-09-27 21:17:24.951116 | orchestrator | 21:17:24.951 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-27 21:17:24.951144 | orchestrator | 21:17:24.951 STDOUT terraform:  + tags = (known after apply)
2025-09-27 21:17:24.951195 | orchestrator | 21:17:24.951 STDOUT terraform:  + updated_at = (known after apply)
2025-09-27 21:17:24.951202 | orchestrator | 21:17:24.951 STDOUT terraform:  }
2025-09-27 21:17:24.951242 | orchestrator | 21:17:24.951 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-09-27 21:17:24.951274 | orchestrator | 21:17:24.951 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-09-27 21:17:24.951309 | orchestrator | 21:17:24.951 STDOUT terraform:  + content = (known after apply)
2025-09-27 21:17:24.951356 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-27 21:17:24.951378 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-27 21:17:24.951413 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-27 21:17:24.951458 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-27 21:17:24.951513 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-27 21:17:24.951527 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-27 21:17:24.951533 | orchestrator | 21:17:24.951 STDOUT terraform:  + directory_permission = "0777"
2025-09-27 21:17:24.951545 | orchestrator | 21:17:24.951 STDOUT terraform:  + file_permission = "0644"
2025-09-27 21:17:24.951588 | orchestrator | 21:17:24.951 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-09-27 21:17:24.951696 | orchestrator | 21:17:24.951 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.951706 | orchestrator | 21:17:24.951 STDOUT terraform:  }
2025-09-27 21:17:24.951710 | orchestrator | 21:17:24.951 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-09-27 21:17:24.951714 | orchestrator | 21:17:24.951 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-09-27 21:17:24.951720 | orchestrator | 21:17:24.951 STDOUT terraform:  + content = (known after apply)
2025-09-27 21:17:24.951786 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-27 21:17:24.951792 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-27 21:17:24.951806 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-27 21:17:24.951853 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-27 21:17:24.951925 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-27 21:17:24.951931 | orchestrator | 21:17:24.951 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-27 21:17:24.951937 | orchestrator | 21:17:24.951 STDOUT terraform:  + directory_permission = "0777"
2025-09-27 21:17:24.951943 | orchestrator | 21:17:24.951 STDOUT terraform:  + file_permission = "0644"
2025-09-27 21:17:24.951977 | orchestrator | 21:17:24.951 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-09-27 21:17:24.952011 | orchestrator | 21:17:24.951 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.952018 | orchestrator | 21:17:24.952 STDOUT terraform:  }
2025-09-27 21:17:24.952063 | orchestrator | 21:17:24.952 STDOUT terraform:  # local_file.inventory will be created
2025-09-27 21:17:24.952068 | orchestrator | 21:17:24.952 STDOUT terraform:  + resource "local_file" "inventory" {
2025-09-27 21:17:24.952106 | orchestrator | 21:17:24.952 STDOUT terraform:  + content = (known after apply)
2025-09-27 21:17:24.952185 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-27 21:17:24.952228 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-27 21:17:24.952272 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-27 21:17:24.952280 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-27 21:17:24.952284 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-27 21:17:24.952290 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-27 21:17:24.952295 | orchestrator | 21:17:24.952 STDOUT terraform:  + directory_permission = "0777"
2025-09-27 21:17:24.952328 | orchestrator | 21:17:24.952 STDOUT terraform:  + file_permission = "0644"
2025-09-27 21:17:24.952344 | orchestrator | 21:17:24.952 STDOUT terraform:  + filename = "inventory.ci"
2025-09-27 21:17:24.952412 | orchestrator | 21:17:24.952 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.952472 | orchestrator | 21:17:24.952 STDOUT terraform:  }
2025-09-27 21:17:24.952482 | orchestrator | 21:17:24.952 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-09-27 21:17:24.952486 | orchestrator | 21:17:24.952 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-09-27 21:17:24.952490 | orchestrator | 21:17:24.952 STDOUT terraform:  + content = (sensitive value)
2025-09-27 21:17:24.952505 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-27 21:17:24.952559 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-27 21:17:24.952565 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-27 21:17:24.952610 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-27 21:17:24.952708 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-27 21:17:24.952718 | orchestrator | 21:17:24.952 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-27 21:17:24.952722 | orchestrator | 21:17:24.952 STDOUT terraform:  + directory_permission = "0700"
2025-09-27 21:17:24.952736 | orchestrator | 21:17:24.952 STDOUT terraform:  + file_permission = "0600"
2025-09-27 21:17:24.952760 | orchestrator | 21:17:24.952 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-09-27 21:17:24.952792 | orchestrator | 21:17:24.952 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.952799 | orchestrator | 21:17:24.952 STDOUT terraform:  }
2025-09-27 21:17:24.952835 | orchestrator | 21:17:24.952 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-09-27 21:17:24.952852 | orchestrator | 21:17:24.952 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-09-27 21:17:24.952919 | orchestrator | 21:17:24.952 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.952948 | orchestrator | 21:17:24.952 STDOUT terraform:  }
2025-09-27 21:17:24.952963 | orchestrator | 21:17:24.952 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-27 21:17:24.952969 | orchestrator | 21:17:24.952 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-27 21:17:24.953038 | orchestrator | 21:17:24.952 STDOUT terraform:  + attachment = (known after apply)
2025-09-27 21:17:24.953058 | orchestrator | 21:17:24.952 STDOUT terraform:  + availability_zone = "nova"
2025-09-27 21:17:24.953064 | orchestrator | 21:17:24.953 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.953149 | orchestrator | 21:17:24.953 STDOUT terraform:  + image_id = (known after apply)
2025-09-27 21:17:24.953159 | orchestrator | 21:17:24.953 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.953164 | orchestrator | 21:17:24.953 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-09-27 21:17:24.953246 | orchestrator | 21:17:24.953 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.953253 | orchestrator | 21:17:24.953 STDOUT terraform:  + size = 80
2025-09-27 21:17:24.953263 | orchestrator | 21:17:24.953 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-27 21:17:24.953267 | orchestrator | 21:17:24.953 STDOUT terraform:  + volume_type = "ssd"
2025-09-27 21:17:24.953270 | orchestrator | 21:17:24.953 STDOUT terraform:  }
2025-09-27 21:17:24.953592 | orchestrator | 21:17:24.953 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-27 21:17:24.953633 | orchestrator | 21:17:24.953 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-27 21:17:24.953667 | orchestrator | 21:17:24.953 STDOUT terraform:  + attachment = (known after apply)
2025-09-27 21:17:24.953699 | orchestrator | 21:17:24.953 STDOUT terraform:  + availability_zone = "nova"
2025-09-27 21:17:24.953712 | orchestrator | 21:17:24.953 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.953717 | orchestrator | 21:17:24.953 STDOUT terraform:  + image_id = (known after apply)
2025-09-27 21:17:24.953721 | orchestrator | 21:17:24.953 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.953759 | orchestrator | 21:17:24.953 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-09-27 21:17:24.953793 | orchestrator | 21:17:24.953 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.953828 | orchestrator | 21:17:24.953 STDOUT terraform:  + size = 80
2025-09-27 21:17:24.953833 | orchestrator | 21:17:24.953 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-27 21:17:24.953847 | orchestrator | 21:17:24.953 STDOUT terraform:  + volume_type = "ssd"
2025-09-27 21:17:24.953853 | orchestrator | 21:17:24.953 STDOUT terraform:  }
2025-09-27 21:17:24.954066 | orchestrator | 21:17:24.953 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-27 21:17:24.954164 | orchestrator | 21:17:24.954 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-27 21:17:24.954170 | orchestrator | 21:17:24.954 STDOUT terraform:  + attachment = (known after apply)
2025-09-27 21:17:24.954174 | orchestrator | 21:17:24.954 STDOUT terraform:  + availability_zone = "nova"
2025-09-27 21:17:24.954247 | orchestrator | 21:17:24.954 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.954253 | orchestrator | 21:17:24.954 STDOUT terraform:  + image_id = (known after apply)
2025-09-27 21:17:24.954274 | orchestrator | 21:17:24.954 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.954357 | orchestrator | 21:17:24.954 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-09-27 21:17:24.954362 | orchestrator | 21:17:24.954 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.954366 | orchestrator | 21:17:24.954 STDOUT terraform:  + size = 80
2025-09-27 21:17:24.954381 | orchestrator | 21:17:24.954 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-27 21:17:24.954456 | orchestrator | 21:17:24.954 STDOUT terraform:  + volume_type = "ssd"
2025-09-27 21:17:24.954469 | orchestrator | 21:17:24.954 STDOUT terraform:  }
2025-09-27 21:17:24.954721 | orchestrator | 21:17:24.954 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-27 21:17:24.954801 | orchestrator | 21:17:24.954 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-27 21:17:24.954808 | orchestrator | 21:17:24.954 STDOUT terraform:  + attachment = (known after apply)
2025-09-27 21:17:24.954812 | orchestrator | 21:17:24.954 STDOUT terraform:  + availability_zone = "nova"
2025-09-27 21:17:24.954872 | orchestrator | 21:17:24.954 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.954880 | orchestrator | 21:17:24.954 STDOUT terraform:  + image_id = (known after apply)
2025-09-27 21:17:24.954913 | orchestrator | 21:17:24.954 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.954998 | orchestrator | 21:17:24.954 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-09-27 21:17:24.955010 | orchestrator | 21:17:24.954 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.957880 | orchestrator | 21:17:24.954 STDOUT terraform:  + size = 80
2025-09-27 21:17:24.958101 | orchestrator | 21:17:24.956 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-27 21:17:24.958118 | orchestrator | 21:17:24.956 STDOUT terraform:  + volume_type = "ssd"
2025-09-27 21:17:24.958123 | orchestrator | 21:17:24.956 STDOUT terraform:  }
2025-09-27 21:17:24.959442 | orchestrator | 21:17:24.959 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-27 21:17:24.959471 | orchestrator | 21:17:24.959 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-27 21:17:24.959482 | orchestrator | 21:17:24.959 STDOUT terraform:  + attachment = (known after apply)
2025-09-27 21:17:24.959488 | orchestrator | 21:17:24.959 STDOUT terraform:  + availability_zone = "nova"
2025-09-27 21:17:24.959525 | orchestrator | 21:17:24.959 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.959560 | orchestrator | 21:17:24.959 STDOUT terraform:  + image_id = (known after apply)
2025-09-27 21:17:24.959595 | orchestrator | 21:17:24.959 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.959654 | orchestrator | 21:17:24.959 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-09-27 21:17:24.959711 | orchestrator | 21:17:24.959 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.959740 | orchestrator | 21:17:24.959 STDOUT terraform:  + size = 80
2025-09-27 21:17:24.959758 | orchestrator | 21:17:24.959 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-27 21:17:24.959782 | orchestrator | 21:17:24.959 STDOUT terraform:  + volume_type = "ssd"
2025-09-27 21:17:24.959790 | orchestrator | 21:17:24.959 STDOUT terraform:  }
2025-09-27 21:17:24.959839 | orchestrator | 21:17:24.959 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-27 21:17:24.959883 | orchestrator | 21:17:24.959 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-27 21:17:24.959921 | orchestrator | 21:17:24.959 STDOUT terraform:  + attachment = (known after apply)
2025-09-27 21:17:24.959946 | orchestrator | 21:17:24.959 STDOUT terraform:  + availability_zone = "nova"
2025-09-27 21:17:24.959993 | orchestrator | 21:17:24.959 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.960017 | orchestrator | 21:17:24.959 STDOUT terraform:  + image_id = (known after apply)
2025-09-27 21:17:24.960149 | orchestrator | 21:17:24.960 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.960165 | orchestrator | 21:17:24.960 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-09-27 21:17:24.960179 | orchestrator | 21:17:24.960 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.960183 | orchestrator | 21:17:24.960 STDOUT terraform:  + size = 80
2025-09-27 21:17:24.960256 | orchestrator | 21:17:24.960 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-27 21:17:24.960263 | orchestrator | 21:17:24.960 STDOUT terraform:  + volume_type = "ssd"
2025-09-27 21:17:24.960269 | orchestrator | 21:17:24.960 STDOUT terraform:  }
2025-09-27 21:17:24.960331 | orchestrator | 21:17:24.960 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-27 21:17:24.960443 | orchestrator | 21:17:24.960 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-27 21:17:24.960503 | orchestrator | 21:17:24.960 STDOUT terraform:  + attachment = (known after apply)
2025-09-27 21:17:24.960516 | orchestrator | 21:17:24.960 STDOUT terraform:  + availability_zone = "nova"
2025-09-27 21:17:24.960520 | orchestrator | 21:17:24.960 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.960535 | orchestrator | 21:17:24.960 STDOUT terraform:  + image_id = (known after apply)
2025-09-27 21:17:24.960539 | orchestrator | 21:17:24.960 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.960626 | orchestrator | 21:17:24.960 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-09-27 21:17:24.960634 | orchestrator | 21:17:24.960 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.960640 | orchestrator | 21:17:24.960 STDOUT terraform:  + size = 80
2025-09-27 21:17:24.960644 | orchestrator | 21:17:24.960 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-27 21:17:24.960710 | orchestrator | 21:17:24.960 STDOUT terraform:  + volume_type = "ssd"
2025-09-27 21:17:24.960718 | orchestrator | 21:17:24.960 STDOUT terraform:  }
2025-09-27 21:17:24.960722 | orchestrator | 21:17:24.960 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-27 21:17:24.960755 | orchestrator | 21:17:24.960 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-27 21:17:24.960823 | orchestrator | 21:17:24.960 STDOUT terraform:  + attachment = (known after apply)
2025-09-27 21:17:24.960828 | orchestrator | 21:17:24.960 STDOUT terraform:  + availability_zone = "nova"
2025-09-27 21:17:24.960908 | orchestrator | 21:17:24.960 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.960913 | orchestrator | 21:17:24.960 STDOUT terraform:  + metadata = (known after apply)
2025-09-27 21:17:24.960917 | orchestrator | 21:17:24.960 STDOUT terraform:  + name = "testbed-volume-0-node-3"
2025-09-27 21:17:24.960950 | orchestrator | 21:17:24.960 STDOUT terraform:  + region = (known
after apply) 2025-09-27 21:17:24.960956 | orchestrator | 21:17:24.960 STDOUT terraform:  + size = 20 2025-09-27 21:17:24.961037 | orchestrator | 21:17:24.960 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 21:17:24.961042 | orchestrator | 21:17:24.960 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 21:17:24.961046 | orchestrator | 21:17:24.960 STDOUT terraform:  } 2025-09-27 21:17:24.961088 | orchestrator | 21:17:24.960 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-27 21:17:24.961123 | orchestrator | 21:17:24.961 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 21:17:24.961130 | orchestrator | 21:17:24.961 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 21:17:24.961171 | orchestrator | 21:17:24.961 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.961236 | orchestrator | 21:17:24.961 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.961269 | orchestrator | 21:17:24.961 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 21:17:24.961275 | orchestrator | 21:17:24.961 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-27 21:17:24.961279 | orchestrator | 21:17:24.961 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.961283 | orchestrator | 21:17:24.961 STDOUT terraform:  + size = 20 2025-09-27 21:17:24.961287 | orchestrator | 21:17:24.961 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 21:17:24.961309 | orchestrator | 21:17:24.961 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 21:17:24.961383 | orchestrator | 21:17:24.961 STDOUT terraform:  } 2025-09-27 21:17:24.962215 | orchestrator | 21:17:24.962 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-27 21:17:24.962297 | orchestrator | 21:17:24.962 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 21:17:24.962361 | 
orchestrator | 21:17:24.962 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 21:17:24.962404 | orchestrator | 21:17:24.962 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.962454 | orchestrator | 21:17:24.962 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.962511 | orchestrator | 21:17:24.962 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 21:17:24.962556 | orchestrator | 21:17:24.962 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-27 21:17:24.962614 | orchestrator | 21:17:24.962 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.962650 | orchestrator | 21:17:24.962 STDOUT terraform:  + size = 20 2025-09-27 21:17:24.962718 | orchestrator | 21:17:24.962 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 21:17:24.962769 | orchestrator | 21:17:24.962 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 21:17:24.962793 | orchestrator | 21:17:24.962 STDOUT terraform:  } 2025-09-27 21:17:24.962859 | orchestrator | 21:17:24.962 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-27 21:17:24.962941 | orchestrator | 21:17:24.962 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 21:17:24.963002 | orchestrator | 21:17:24.962 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 21:17:24.963035 | orchestrator | 21:17:24.963 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.963098 | orchestrator | 21:17:24.963 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.963155 | orchestrator | 21:17:24.963 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 21:17:24.963214 | orchestrator | 21:17:24.963 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-27 21:17:24.963256 | orchestrator | 21:17:24.963 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.963300 | orchestrator | 21:17:24.963 STDOUT terraform:  + size 
= 20 2025-09-27 21:17:24.963331 | orchestrator | 21:17:24.963 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 21:17:24.963376 | orchestrator | 21:17:24.963 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 21:17:24.963397 | orchestrator | 21:17:24.963 STDOUT terraform:  } 2025-09-27 21:17:24.963531 | orchestrator | 21:17:24.963 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-27 21:17:24.963592 | orchestrator | 21:17:24.963 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 21:17:24.963641 | orchestrator | 21:17:24.963 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 21:17:24.963697 | orchestrator | 21:17:24.963 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.963749 | orchestrator | 21:17:24.963 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.963891 | orchestrator | 21:17:24.963 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 21:17:24.963950 | orchestrator | 21:17:24.963 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-27 21:17:24.964054 | orchestrator | 21:17:24.963 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.964102 | orchestrator | 21:17:24.964 STDOUT terraform:  + size = 20 2025-09-27 21:17:24.964136 | orchestrator | 21:17:24.964 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 21:17:24.964285 | orchestrator | 21:17:24.964 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 21:17:24.964324 | orchestrator | 21:17:24.964 STDOUT terraform:  } 2025-09-27 21:17:24.964504 | orchestrator | 21:17:24.964 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-27 21:17:24.964731 | orchestrator | 21:17:24.964 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 21:17:24.964917 | orchestrator | 21:17:24.964 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 
21:17:24.964991 | orchestrator | 21:17:24.964 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.965111 | orchestrator | 21:17:24.965 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.965185 | orchestrator | 21:17:24.965 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 21:17:24.965362 | orchestrator | 21:17:24.965 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-27 21:17:24.965524 | orchestrator | 21:17:24.965 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.965586 | orchestrator | 21:17:24.965 STDOUT terraform:  + size = 20 2025-09-27 21:17:24.965716 | orchestrator | 21:17:24.965 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 21:17:24.965944 | orchestrator | 21:17:24.965 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 21:17:24.966075 | orchestrator | 21:17:24.965 STDOUT terraform:  } 2025-09-27 21:17:24.966226 | orchestrator | 21:17:24.966 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-27 21:17:24.966432 | orchestrator | 21:17:24.966 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 21:17:24.966608 | orchestrator | 21:17:24.966 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 21:17:24.966789 | orchestrator | 21:17:24.966 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.966839 | orchestrator | 21:17:24.966 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.966936 | orchestrator | 21:17:24.966 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 21:17:24.967037 | orchestrator | 21:17:24.966 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-27 21:17:24.967125 | orchestrator | 21:17:24.967 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.967260 | orchestrator | 21:17:24.967 STDOUT terraform:  + size = 20 2025-09-27 21:17:24.967457 | orchestrator | 21:17:24.967 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-27 21:17:24.967594 | orchestrator | 21:17:24.967 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 21:17:24.967651 | orchestrator | 21:17:24.967 STDOUT terraform:  } 2025-09-27 21:17:24.967922 | orchestrator | 21:17:24.967 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-27 21:17:24.968133 | orchestrator | 21:17:24.967 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 21:17:24.968308 | orchestrator | 21:17:24.968 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 21:17:24.968385 | orchestrator | 21:17:24.968 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.968524 | orchestrator | 21:17:24.968 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.968744 | orchestrator | 21:17:24.968 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 21:17:24.968950 | orchestrator | 21:17:24.968 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-27 21:17:24.969142 | orchestrator | 21:17:24.969 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.969242 | orchestrator | 21:17:24.969 STDOUT terraform:  + size = 20 2025-09-27 21:17:24.969404 | orchestrator | 21:17:24.969 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 21:17:24.969519 | orchestrator | 21:17:24.969 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 21:17:24.969594 | orchestrator | 21:17:24.969 STDOUT terraform:  } 2025-09-27 21:17:24.969882 | orchestrator | 21:17:24.969 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-27 21:17:24.970254 | orchestrator | 21:17:24.969 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 21:17:24.970426 | orchestrator | 21:17:24.970 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 21:17:24.970577 | orchestrator | 21:17:24.970 STDOUT terraform:  + availability_zone = 
"nova" 2025-09-27 21:17:24.970708 | orchestrator | 21:17:24.970 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.970765 | orchestrator | 21:17:24.970 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 21:17:24.970967 | orchestrator | 21:17:24.970 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-27 21:17:24.971069 | orchestrator | 21:17:24.970 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.971118 | orchestrator | 21:17:24.971 STDOUT terraform:  + size = 20 2025-09-27 21:17:24.971258 | orchestrator | 21:17:24.971 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 21:17:24.971384 | orchestrator | 21:17:24.971 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 21:17:24.971463 | orchestrator | 21:17:24.971 STDOUT terraform:  } 2025-09-27 21:17:24.971626 | orchestrator | 21:17:24.971 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-27 21:17:24.971770 | orchestrator | 21:17:24.971 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-27 21:17:24.971840 | orchestrator | 21:17:24.971 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-27 21:17:24.971930 | orchestrator | 21:17:24.971 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-27 21:17:24.972112 | orchestrator | 21:17:24.972 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-27 21:17:24.972223 | orchestrator | 21:17:24.972 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 21:17:24.972273 | orchestrator | 21:17:24.972 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.972317 | orchestrator | 21:17:24.972 STDOUT terraform:  + config_drive = true 2025-09-27 21:17:24.972474 | orchestrator | 21:17:24.972 STDOUT terraform:  + created = (known after apply) 2025-09-27 21:17:24.972551 | orchestrator | 21:17:24.972 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-27 21:17:24.972638 | orchestrator | 
21:17:24.972 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-27 21:17:24.972815 | orchestrator | 21:17:24.972 STDOUT terraform:  + force_delete = false 2025-09-27 21:17:24.972925 | orchestrator | 21:17:24.972 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-27 21:17:24.973143 | orchestrator | 21:17:24.972 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.973188 | orchestrator | 21:17:24.973 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 21:17:24.973232 | orchestrator | 21:17:24.973 STDOUT terraform:  + image_name = (known after apply) 2025-09-27 21:17:24.973297 | orchestrator | 21:17:24.973 STDOUT terraform:  + key_pair = "testbed" 2025-09-27 21:17:24.973362 | orchestrator | 21:17:24.973 STDOUT terraform:  + name = "testbed-manager" 2025-09-27 21:17:24.973397 | orchestrator | 21:17:24.973 STDOUT terraform:  + power_state = "active" 2025-09-27 21:17:24.973457 | orchestrator | 21:17:24.973 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.973517 | orchestrator | 21:17:24.973 STDOUT terraform:  + security_groups = (known after apply) 2025-09-27 21:17:24.973548 | orchestrator | 21:17:24.973 STDOUT terraform:  + stop_before_destroy = false 2025-09-27 21:17:24.973608 | orchestrator | 21:17:24.973 STDOUT terraform:  + updated = (known after apply) 2025-09-27 21:17:24.973661 | orchestrator | 21:17:24.973 STDOUT terraform:  + user_data = (sensitive value) 2025-09-27 21:17:24.973713 | orchestrator | 21:17:24.973 STDOUT terraform:  + block_device { 2025-09-27 21:17:24.973761 | orchestrator | 21:17:24.973 STDOUT terraform:  + boot_index = 0 2025-09-27 21:17:24.973796 | orchestrator | 21:17:24.973 STDOUT terraform:  + delete_on_termination = false 2025-09-27 21:17:24.973849 | orchestrator | 21:17:24.973 STDOUT terraform:  + destination_type = "volume" 2025-09-27 21:17:24.973899 | orchestrator | 21:17:24.973 STDOUT terraform:  + multiattach = false 2025-09-27 21:17:24.973947 | orchestrator | 
21:17:24.973 STDOUT terraform:  + source_type = "volume" 2025-09-27 21:17:24.974042 | orchestrator | 21:17:24.973 STDOUT terraform:  + uuid = (known after apply) 2025-09-27 21:17:24.974070 | orchestrator | 21:17:24.974 STDOUT terraform:  } 2025-09-27 21:17:24.974094 | orchestrator | 21:17:24.974 STDOUT terraform:  + network { 2025-09-27 21:17:24.974139 | orchestrator | 21:17:24.974 STDOUT terraform:  + access_network = false 2025-09-27 21:17:24.974178 | orchestrator | 21:17:24.974 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-27 21:17:24.974231 | orchestrator | 21:17:24.974 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-27 21:17:24.974287 | orchestrator | 21:17:24.974 STDOUT terraform:  + mac = (known after apply) 2025-09-27 21:17:24.974326 | orchestrator | 21:17:24.974 STDOUT terraform:  + name = (known after apply) 2025-09-27 21:17:24.974378 | orchestrator | 21:17:24.974 STDOUT terraform:  + port = (known after apply) 2025-09-27 21:17:24.974433 | orchestrator | 21:17:24.974 STDOUT terraform:  + uuid = (known after apply) 2025-09-27 21:17:24.974456 | orchestrator | 21:17:24.974 STDOUT terraform:  } 2025-09-27 21:17:24.974479 | orchestrator | 21:17:24.974 STDOUT terraform:  } 2025-09-27 21:17:24.974545 | orchestrator | 21:17:24.974 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-27 21:17:24.974610 | orchestrator | 21:17:24.974 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-27 21:17:24.974676 | orchestrator | 21:17:24.974 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-27 21:17:24.974737 | orchestrator | 21:17:24.974 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-27 21:17:24.974779 | orchestrator | 21:17:24.974 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-27 21:17:24.974841 | orchestrator | 21:17:24.974 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 21:17:24.974873 | orchestrator | 
21:17:24.974 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.974919 | orchestrator | 21:17:24.974 STDOUT terraform:  + config_drive = true 2025-09-27 21:17:24.974978 | orchestrator | 21:17:24.974 STDOUT terraform:  + created = (known after apply) 2025-09-27 21:17:24.975022 | orchestrator | 21:17:24.974 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-27 21:17:24.975074 | orchestrator | 21:17:24.975 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-27 21:17:24.975108 | orchestrator | 21:17:24.975 STDOUT terraform:  + force_delete = false 2025-09-27 21:17:24.975164 | orchestrator | 21:17:24.975 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-27 21:17:24.975222 | orchestrator | 21:17:24.975 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.975268 | orchestrator | 21:17:24.975 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 21:17:24.975327 | orchestrator | 21:17:24.975 STDOUT terraform:  + image_name = (known after apply) 2025-09-27 21:17:24.975376 | orchestrator | 21:17:24.975 STDOUT terraform:  + key_pair = "testbed" 2025-09-27 21:17:24.975416 | orchestrator | 21:17:24.975 STDOUT terraform:  + name = "testbed-node-0" 2025-09-27 21:17:24.975462 | orchestrator | 21:17:24.975 STDOUT terraform:  + power_state = "active" 2025-09-27 21:17:24.975529 | orchestrator | 21:17:24.975 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.975575 | orchestrator | 21:17:24.975 STDOUT terraform:  + security_groups = (known after apply) 2025-09-27 21:17:24.975624 | orchestrator | 21:17:24.975 STDOUT terraform:  + stop_before_destroy = false 2025-09-27 21:17:24.975693 | orchestrator | 21:17:24.975 STDOUT terraform:  + updated = (known after apply) 2025-09-27 21:17:24.975767 | orchestrator | 21:17:24.975 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-27 21:17:24.975792 | orchestrator | 21:17:24.975 STDOUT terraform:  + block_device { 
2025-09-27 21:17:24.975839 | orchestrator | 21:17:24.975 STDOUT terraform:  + boot_index = 0 2025-09-27 21:17:24.975877 | orchestrator | 21:17:24.975 STDOUT terraform:  + delete_on_termination = false 2025-09-27 21:17:24.975927 | orchestrator | 21:17:24.975 STDOUT terraform:  + destination_type = "volume" 2025-09-27 21:17:24.975963 | orchestrator | 21:17:24.975 STDOUT terraform:  + multiattach = false 2025-09-27 21:17:24.976015 | orchestrator | 21:17:24.975 STDOUT terraform:  + source_type = "volume" 2025-09-27 21:17:24.976091 | orchestrator | 21:17:24.976 STDOUT terraform:  + uuid = (known after apply) 2025-09-27 21:17:24.976117 | orchestrator | 21:17:24.976 STDOUT terraform:  } 2025-09-27 21:17:24.976138 | orchestrator | 21:17:24.976 STDOUT terraform:  + network { 2025-09-27 21:17:24.976182 | orchestrator | 21:17:24.976 STDOUT terraform:  + access_network = false 2025-09-27 21:17:24.976220 | orchestrator | 21:17:24.976 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-27 21:17:24.976331 | orchestrator | 21:17:24.976 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-27 21:17:24.976386 | orchestrator | 21:17:24.976 STDOUT terraform:  + mac = (known after apply) 2025-09-27 21:17:24.976451 | orchestrator | 21:17:24.976 STDOUT terraform:  + name = (known after apply) 2025-09-27 21:17:24.976573 | orchestrator | 21:17:24.976 STDOUT terraform:  + port = (known after apply) 2025-09-27 21:17:24.976751 | orchestrator | 21:17:24.976 STDOUT terraform:  + uuid = (known after apply) 2025-09-27 21:17:24.976837 | orchestrator | 21:17:24.976 STDOUT terraform:  } 2025-09-27 21:17:24.976926 | orchestrator | 21:17:24.976 STDOUT terraform:  } 2025-09-27 21:17:24.977339 | orchestrator | 21:17:24.976 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-27 21:17:24.977385 | orchestrator | 21:17:24.977 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-27 21:17:24.977424 | orchestrator | 
21:17:24.977 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-27 21:17:24.977447 | orchestrator | 21:17:24.977 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-27 21:17:24.977525 | orchestrator | 21:17:24.977 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-27 21:17:24.977547 | orchestrator | 21:17:24.977 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 21:17:24.977574 | orchestrator | 21:17:24.977 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.977614 | orchestrator | 21:17:24.977 STDOUT terraform:  + config_drive = true 2025-09-27 21:17:24.977642 | orchestrator | 21:17:24.977 STDOUT terraform:  + created = (known after apply) 2025-09-27 21:17:24.977656 | orchestrator | 21:17:24.977 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-27 21:17:24.977660 | orchestrator | 21:17:24.977 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-27 21:17:24.977739 | orchestrator | 21:17:24.977 STDOUT terraform:  + force_delete = false 2025-09-27 21:17:24.977822 | orchestrator | 21:17:24.977 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-27 21:17:24.977857 | orchestrator | 21:17:24.977 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.977946 | orchestrator | 21:17:24.977 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 21:17:24.977952 | orchestrator | 21:17:24.977 STDOUT terraform:  + image_name = (known after apply) 2025-09-27 21:17:24.977970 | orchestrator | 21:17:24.977 STDOUT terraform:  + key_pair = "testbed" 2025-09-27 21:17:24.977975 | orchestrator | 21:17:24.977 STDOUT terraform:  + name = "testbed-node-1" 2025-09-27 21:17:24.977987 | orchestrator | 21:17:24.977 STDOUT terraform:  + power_state = "active" 2025-09-27 21:17:24.977991 | orchestrator | 21:17:24.977 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.978066 | orchestrator | 21:17:24.977 STDOUT terraform:  + security_groups = (known after apply) 
2025-09-27 21:17:24.978078 | orchestrator | 21:17:24.977 STDOUT terraform:  + stop_before_destroy = false 2025-09-27 21:17:24.978173 | orchestrator | 21:17:24.977 STDOUT terraform:  + updated = (known after apply) 2025-09-27 21:17:24.978299 | orchestrator | 21:17:24.977 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-27 21:17:24.978312 | orchestrator | 21:17:24.978 STDOUT terraform:  + block_device { 2025-09-27 21:17:24.978325 | orchestrator | 21:17:24.978 STDOUT terraform:  + boot_index = 0 2025-09-27 21:17:24.978329 | orchestrator | 21:17:24.978 STDOUT terraform:  + delete_on_termination = false 2025-09-27 21:17:24.978340 | orchestrator | 21:17:24.978 STDOUT terraform:  + destination_type = "volume" 2025-09-27 21:17:24.978353 | orchestrator | 21:17:24.978 STDOUT terraform:  + multiattach = false 2025-09-27 21:17:24.978373 | orchestrator | 21:17:24.978 STDOUT terraform:  + source_type = "volume" 2025-09-27 21:17:24.978398 | orchestrator | 21:17:24.978 STDOUT terraform:  + uuid = (known after apply) 2025-09-27 21:17:24.978448 | orchestrator | 21:17:24.978 STDOUT terraform:  } 2025-09-27 21:17:24.978502 | orchestrator | 21:17:24.978 STDOUT terraform:  + network { 2025-09-27 21:17:24.978508 | orchestrator | 21:17:24.978 STDOUT terraform:  + access_network = false 2025-09-27 21:17:24.978512 | orchestrator | 21:17:24.978 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-27 21:17:24.978516 | orchestrator | 21:17:24.978 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-27 21:17:24.978520 | orchestrator | 21:17:24.978 STDOUT terraform:  + mac = (known after apply) 2025-09-27 21:17:24.978524 | orchestrator | 21:17:24.978 STDOUT terraform:  + name = (known after apply) 2025-09-27 21:17:24.978531 | orchestrator | 21:17:24.978 STDOUT terraform:  + port = (known after apply) 2025-09-27 21:17:24.978537 | orchestrator | 21:17:24.978 STDOUT terraform:  + uuid = (known after apply) 2025-09-27 21:17:24.978541 | 
orchestrator | 21:17:24.978 STDOUT terraform:  } 2025-09-27 21:17:24.978564 | orchestrator | 21:17:24.978 STDOUT terraform:  } 2025-09-27 21:17:24.978568 | orchestrator | 21:17:24.978 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-27 21:17:24.978572 | orchestrator | 21:17:24.978 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-27 21:17:24.978577 | orchestrator | 21:17:24.978 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-27 21:17:24.978583 | orchestrator | 21:17:24.978 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-27 21:17:24.978652 | orchestrator | 21:17:24.978 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-27 21:17:24.978658 | orchestrator | 21:17:24.978 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 21:17:24.978688 | orchestrator | 21:17:24.978 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 21:17:24.978704 | orchestrator | 21:17:24.978 STDOUT terraform:  + config_drive = true 2025-09-27 21:17:24.978733 | orchestrator | 21:17:24.978 STDOUT terraform:  + created = (known after apply) 2025-09-27 21:17:24.978781 | orchestrator | 21:17:24.978 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-27 21:17:24.978816 | orchestrator | 21:17:24.978 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-27 21:17:24.978828 | orchestrator | 21:17:24.978 STDOUT terraform:  + force_delete = false 2025-09-27 21:17:24.978917 | orchestrator | 21:17:24.978 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-27 21:17:24.978958 | orchestrator | 21:17:24.978 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.978990 | orchestrator | 21:17:24.978 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 21:17:24.979009 | orchestrator | 21:17:24.978 STDOUT terraform:  + image_name = (known after apply) 2025-09-27 21:17:24.979013 | orchestrator | 21:17:24.978 STDOUT terraform:  + 
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
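The node_server entries above would be produced by a counted resource block roughly like the following. This is a hypothetical reconstruction, not the actual testbed Terraform: the variable and resource names referenced (`var.number_of_nodes`, `node_volume`, `node_port_management`) are assumptions, and `user_data` would come from a template rather than the literal hash shown in the plan.

```hcl
# Sketch (assumed names) of a node definition consistent with the plan output.
resource "openstack_compute_instance_v2" "node_server" {
  count             = var.number_of_nodes            # assumption: count variable
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.sh")           # assumption: real source differs

  # Boot from a pre-created volume; volume survives instance deletion.
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    multiattach           = false
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id  # assumption
  }

  # Attach via a pre-created management port (matches the node_port_management
  # resources later in the plan); direct network attachment is also possible.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id  # assumption
  }
}
```

Using pre-created ports (rather than letting Nova create them) is what allows the `allowed_address_pairs` entries seen on the management ports further down in the plan.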
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
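The floating-IP pair above follows the usual two-resource pattern: allocate an address from a pool, then bind it to a port. A minimal sketch, with the port reference and pool name taken from this plan but the wiring otherwise assumed:

```hcl
# Sketch: allocate a floating IP from the "public" pool and associate it
# with the manager's management port (assumed reference).
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id  # assumption
}
```

Associating via `port_id` (rather than an instance) keeps the floating IP independent of the server lifecycle; how the nine `node_volume_attachment` resources map volumes to the six nodes is not visible in the plan and is left out here.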
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
terraform:  } 2025-09-27 21:17:24.989392 | orchestrator | 21:17:24.988 STDOUT terraform:  } 2025-09-27 21:17:24.989440 | orchestrator | 21:17:24.988 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-27 21:17:24.989476 | orchestrator | 21:17:24.988 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-27 21:17:24.989488 | orchestrator | 21:17:24.988 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-27 21:17:24.989525 | orchestrator | 21:17:24.989 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-27 21:17:24.989548 | orchestrator | 21:17:24.989 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-27 21:17:24.989627 | orchestrator | 21:17:24.989 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 21:17:24.989662 | orchestrator | 21:17:24.989 STDOUT terraform:  + device_id = (known after apply) 2025-09-27 21:17:24.989725 | orchestrator | 21:17:24.989 STDOUT terraform:  + device_owner = (known after apply) 2025-09-27 21:17:24.989761 | orchestrator | 21:17:24.989 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-27 21:17:24.989781 | orchestrator | 21:17:24.989 STDOUT terraform:  + dns_name = (known after apply) 2025-09-27 21:17:24.989808 | orchestrator | 21:17:24.989 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.989820 | orchestrator | 21:17:24.989 STDOUT terraform:  + mac_address = (known after apply) 2025-09-27 21:17:24.989841 | orchestrator | 21:17:24.989 STDOUT terraform:  + network_id = (known after apply) 2025-09-27 21:17:24.989890 | orchestrator | 21:17:24.989 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-27 21:17:24.989916 | orchestrator | 21:17:24.989 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-27 21:17:24.989938 | orchestrator | 21:17:24.989 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.989975 | 
orchestrator | 21:17:24.989 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-27 21:17:24.989986 | orchestrator | 21:17:24.989 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.989999 | orchestrator | 21:17:24.989 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.990003 | orchestrator | 21:17:24.989 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-09-27 21:17:24.990027 | orchestrator | 21:17:24.989 STDOUT terraform:  } 2025-09-27 21:17:24.990068 | orchestrator | 21:17:24.989 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.990073 | orchestrator | 21:17:24.989 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-27 21:17:24.990076 | orchestrator | 21:17:24.989 STDOUT terraform:  } 2025-09-27 21:17:24.990080 | orchestrator | 21:17:24.989 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.990084 | orchestrator | 21:17:24.989 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-09-27 21:17:24.990088 | orchestrator | 21:17:24.989 STDOUT terraform:  } 2025-09-27 21:17:24.990155 | orchestrator | 21:17:24.989 STDOUT terraform:  + binding (known after apply) 2025-09-27 21:17:24.990168 | orchestrator | 21:17:24.989 STDOUT terraform:  + fixed_ip { 2025-09-27 21:17:24.990178 | orchestrator | 21:17:24.989 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-27 21:17:24.990196 | orchestrator | 21:17:24.989 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-27 21:17:24.990200 | orchestrator | 21:17:24.989 STDOUT terraform:  } 2025-09-27 21:17:24.990216 | orchestrator | 21:17:24.989 STDOUT terraform:  } 2025-09-27 21:17:24.990220 | orchestrator | 21:17:24.989 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-27 21:17:24.990233 | orchestrator | 21:17:24.989 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-27 21:17:24.990250 | orchestrator | 21:17:24.989 STDOUT 
terraform:  + admin_state_up = (known after apply) 2025-09-27 21:17:24.990254 | orchestrator | 21:17:24.989 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-27 21:17:24.990270 | orchestrator | 21:17:24.989 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-27 21:17:24.990275 | orchestrator | 21:17:24.989 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 21:17:24.990285 | orchestrator | 21:17:24.989 STDOUT terraform:  + device_id = (known after apply) 2025-09-27 21:17:24.990302 | orchestrator | 21:17:24.989 STDOUT terraform:  + device_owner = (known after apply) 2025-09-27 21:17:24.990306 | orchestrator | 21:17:24.989 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-27 21:17:24.990323 | orchestrator | 21:17:24.990 STDOUT terraform:  + dns_name = (known after apply) 2025-09-27 21:17:24.990327 | orchestrator | 21:17:24.990 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.990409 | orchestrator | 21:17:24.990 STDOUT terraform:  + mac_address = (known after apply) 2025-09-27 21:17:24.990414 | orchestrator | 21:17:24.990 STDOUT terraform:  + network_id = (known after apply) 2025-09-27 21:17:24.990417 | orchestrator | 21:17:24.990 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-27 21:17:24.990428 | orchestrator | 21:17:24.990 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-27 21:17:24.990446 | orchestrator | 21:17:24.990 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.990450 | orchestrator | 21:17:24.990 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-27 21:17:24.990462 | orchestrator | 21:17:24.990 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.990472 | orchestrator | 21:17:24.990 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.990500 | orchestrator | 21:17:24.990 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-09-27 21:17:24.990514 | orchestrator | 
21:17:24.990 STDOUT terraform:  } 2025-09-27 21:17:24.990518 | orchestrator | 21:17:24.990 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.990562 | orchestrator | 21:17:24.990 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-27 21:17:24.990576 | orchestrator | 21:17:24.990 STDOUT terraform:  } 2025-09-27 21:17:24.990587 | orchestrator | 21:17:24.990 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.990598 | orchestrator | 21:17:24.990 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-09-27 21:17:24.990608 | orchestrator | 21:17:24.990 STDOUT terraform:  } 2025-09-27 21:17:24.990619 | orchestrator | 21:17:24.990 STDOUT terraform:  + binding (known after apply) 2025-09-27 21:17:24.990630 | orchestrator | 21:17:24.990 STDOUT terraform:  + fixed_ip { 2025-09-27 21:17:24.990634 | orchestrator | 21:17:24.990 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-27 21:17:24.990638 | orchestrator | 21:17:24.990 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-27 21:17:24.990661 | orchestrator | 21:17:24.990 STDOUT terraform:  } 2025-09-27 21:17:24.990665 | orchestrator | 21:17:24.990 STDOUT terraform:  } 2025-09-27 21:17:24.990695 | orchestrator | 21:17:24.990 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-27 21:17:24.990699 | orchestrator | 21:17:24.990 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-27 21:17:24.990703 | orchestrator | 21:17:24.990 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-27 21:17:24.990715 | orchestrator | 21:17:24.990 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-27 21:17:24.990757 | orchestrator | 21:17:24.990 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-27 21:17:24.990814 | orchestrator | 21:17:24.990 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 21:17:24.990864 | orchestrator | 21:17:24.990 STDOUT 
terraform:  + device_id = (known after apply) 2025-09-27 21:17:24.990868 | orchestrator | 21:17:24.990 STDOUT terraform:  + device_owner = (known after apply) 2025-09-27 21:17:24.990876 | orchestrator | 21:17:24.990 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-27 21:17:24.990952 | orchestrator | 21:17:24.990 STDOUT terraform:  + dns_name = (known after apply) 2025-09-27 21:17:24.990992 | orchestrator | 21:17:24.990 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.991037 | orchestrator | 21:17:24.990 STDOUT terraform:  + mac_address = (known after apply) 2025-09-27 21:17:24.991058 | orchestrator | 21:17:24.990 STDOUT terraform:  + network_id = (known after apply) 2025-09-27 21:17:24.991083 | orchestrator | 21:17:24.990 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-27 21:17:24.991095 | orchestrator | 21:17:24.991 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-27 21:17:24.991106 | orchestrator | 21:17:24.991 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.991123 | orchestrator | 21:17:24.991 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-27 21:17:24.991172 | orchestrator | 21:17:24.991 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.991176 | orchestrator | 21:17:24.991 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.991182 | orchestrator | 21:17:24.991 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-09-27 21:17:24.991208 | orchestrator | 21:17:24.991 STDOUT terraform:  } 2025-09-27 21:17:24.991259 | orchestrator | 21:17:24.991 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.991297 | orchestrator | 21:17:24.991 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-27 21:17:24.991301 | orchestrator | 21:17:24.991 STDOUT terraform:  } 2025-09-27 21:17:24.991305 | orchestrator | 21:17:24.991 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.991310 | 
orchestrator | 21:17:24.991 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-09-27 21:17:24.991314 | orchestrator | 21:17:24.991 STDOUT terraform:  } 2025-09-27 21:17:24.991388 | orchestrator | 21:17:24.991 STDOUT terraform:  + binding (known after apply) 2025-09-27 21:17:24.991394 | orchestrator | 21:17:24.991 STDOUT terraform:  + fixed_ip { 2025-09-27 21:17:24.991397 | orchestrator | 21:17:24.991 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-27 21:17:24.991401 | orchestrator | 21:17:24.991 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-27 21:17:24.991407 | orchestrator | 21:17:24.991 STDOUT terraform:  } 2025-09-27 21:17:24.991421 | orchestrator | 21:17:24.991 STDOUT terraform:  } 2025-09-27 21:17:24.991440 | orchestrator | 21:17:24.991 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-27 21:17:24.991510 | orchestrator | 21:17:24.991 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-27 21:17:24.991524 | orchestrator | 21:17:24.991 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-27 21:17:24.991544 | orchestrator | 21:17:24.991 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-27 21:17:24.991593 | orchestrator | 21:17:24.991 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-27 21:17:24.991620 | orchestrator | 21:17:24.991 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 21:17:24.991630 | orchestrator | 21:17:24.991 STDOUT terraform:  + device_id = (known after apply) 2025-09-27 21:17:24.991691 | orchestrator | 21:17:24.991 STDOUT terraform:  + device_owner = (known after apply) 2025-09-27 21:17:24.991740 | orchestrator | 21:17:24.991 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-27 21:17:24.991769 | orchestrator | 21:17:24.991 STDOUT terraform:  + dns_name = (known after apply) 2025-09-27 21:17:24.991791 | orchestrator | 21:17:24.991 STDOUT terraform:  
+ id = (known after apply) 2025-09-27 21:17:24.991796 | orchestrator | 21:17:24.991 STDOUT terraform:  + mac_address = (known after apply) 2025-09-27 21:17:24.991835 | orchestrator | 21:17:24.991 STDOUT terraform:  + network_id = (known after apply) 2025-09-27 21:17:24.991896 | orchestrator | 21:17:24.991 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-27 21:17:24.991910 | orchestrator | 21:17:24.991 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-27 21:17:24.991963 | orchestrator | 21:17:24.991 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.992029 | orchestrator | 21:17:24.991 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-27 21:17:24.992047 | orchestrator | 21:17:24.991 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.992075 | orchestrator | 21:17:24.991 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.992108 | orchestrator | 21:17:24.991 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-09-27 21:17:24.992158 | orchestrator | 21:17:24.992 STDOUT terraform:  } 2025-09-27 21:17:24.992170 | orchestrator | 21:17:24.992 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.992185 | orchestrator | 21:17:24.992 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-27 21:17:24.992190 | orchestrator | 21:17:24.992 STDOUT terraform:  } 2025-09-27 21:17:24.992202 | orchestrator | 21:17:24.992 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 21:17:24.992219 | orchestrator | 21:17:24.992 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-09-27 21:17:24.992225 | orchestrator | 21:17:24.992 STDOUT terraform:  } 2025-09-27 21:17:24.992242 | orchestrator | 21:17:24.992 STDOUT terraform:  + binding (known after apply) 2025-09-27 21:17:24.992246 | orchestrator | 21:17:24.992 STDOUT terraform:  + fixed_ip { 2025-09-27 21:17:24.992259 | orchestrator | 21:17:24.992 STDOUT terraform:  + ip_address = "192.168.16.15" 
2025-09-27 21:17:24.992263 | orchestrator | 21:17:24.992 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-27 21:17:24.992302 | orchestrator | 21:17:24.992 STDOUT terraform:  } 2025-09-27 21:17:24.992360 | orchestrator | 21:17:24.992 STDOUT terraform:  } 2025-09-27 21:17:24.992386 | orchestrator | 21:17:24.992 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-27 21:17:24.992391 | orchestrator | 21:17:24.992 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-27 21:17:24.992395 | orchestrator | 21:17:24.992 STDOUT terraform:  + force_destroy = false 2025-09-27 21:17:24.992417 | orchestrator | 21:17:24.992 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.992436 | orchestrator | 21:17:24.992 STDOUT terraform:  + port_id = (known after apply) 2025-09-27 21:17:24.992448 | orchestrator | 21:17:24.992 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.992458 | orchestrator | 21:17:24.992 STDOUT terraform:  + router_id = (known after apply) 2025-09-27 21:17:24.992505 | orchestrator | 21:17:24.992 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-27 21:17:24.992531 | orchestrator | 21:17:24.992 STDOUT terraform:  } 2025-09-27 21:17:24.992537 | orchestrator | 21:17:24.992 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-27 21:17:24.992556 | orchestrator | 21:17:24.992 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-27 21:17:24.992568 | orchestrator | 21:17:24.992 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-27 21:17:24.992581 | orchestrator | 21:17:24.992 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 21:17:24.992585 | orchestrator | 21:17:24.992 STDOUT terraform:  + availability_zone_hints = [ 2025-09-27 21:17:24.992591 | orchestrator | 21:17:24.992 STDOUT terraform:  + "nova", 2025-09-27 21:17:24.992643 | 
orchestrator | 21:17:24.992 STDOUT terraform:  ] 2025-09-27 21:17:24.992656 | orchestrator | 21:17:24.992 STDOUT terraform:  + distributed = (known after apply) 2025-09-27 21:17:24.992695 | orchestrator | 21:17:24.992 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-27 21:17:24.992768 | orchestrator | 21:17:24.992 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-27 21:17:24.992827 | orchestrator | 21:17:24.992 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-27 21:17:24.992857 | orchestrator | 21:17:24.992 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.992881 | orchestrator | 21:17:24.992 STDOUT terraform:  + name = "testbed" 2025-09-27 21:17:24.992898 | orchestrator | 21:17:24.992 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.992902 | orchestrator | 21:17:24.992 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.992908 | orchestrator | 21:17:24.992 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-27 21:17:24.992952 | orchestrator | 21:17:24.992 STDOUT terraform:  } 2025-09-27 21:17:24.993003 | orchestrator | 21:17:24.992 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-27 21:17:24.993044 | orchestrator | 21:17:24.992 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-27 21:17:24.993048 | orchestrator | 21:17:24.993 STDOUT terraform:  + description = "ssh" 2025-09-27 21:17:24.993079 | orchestrator | 21:17:24.993 STDOUT terraform:  + direction = "ingress" 2025-09-27 21:17:24.993120 | orchestrator | 21:17:24.993 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 21:17:24.993198 | orchestrator | 21:17:24.993 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.993207 | orchestrator | 21:17:24.993 STDOUT terraform:  + port_range_max = 22 2025-09-27 21:17:24.993211 | 
orchestrator | 21:17:24.993 STDOUT terraform:  + port_range_min = 22 2025-09-27 21:17:24.993223 | orchestrator | 21:17:24.993 STDOUT terraform:  + protocol = "tcp" 2025-09-27 21:17:24.993229 | orchestrator | 21:17:24.993 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.993241 | orchestrator | 21:17:24.993 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 21:17:24.993281 | orchestrator | 21:17:24.993 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 21:17:24.993340 | orchestrator | 21:17:24.993 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 21:17:24.993345 | orchestrator | 21:17:24.993 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 21:17:24.993360 | orchestrator | 21:17:24.993 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.993366 | orchestrator | 21:17:24.993 STDOUT terraform:  } 2025-09-27 21:17:24.993430 | orchestrator | 21:17:24.993 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-27 21:17:24.993504 | orchestrator | 21:17:24.993 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-27 21:17:24.993518 | orchestrator | 21:17:24.993 STDOUT terraform:  + description = "wireguard" 2025-09-27 21:17:24.993524 | orchestrator | 21:17:24.993 STDOUT terraform:  + direction = "ingress" 2025-09-27 21:17:24.993542 | orchestrator | 21:17:24.993 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 21:17:24.993584 | orchestrator | 21:17:24.993 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.993591 | orchestrator | 21:17:24.993 STDOUT terraform:  + port_range_max = 51820 2025-09-27 21:17:24.993619 | orchestrator | 21:17:24.993 STDOUT terraform:  + port_range_min = 51820 2025-09-27 21:17:24.993648 | orchestrator | 21:17:24.993 STDOUT terraform:  + protocol = "udp" 2025-09-27 21:17:24.993715 | orchestrator | 
21:17:24.993 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.993738 | orchestrator | 21:17:24.993 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 21:17:24.993753 | orchestrator | 21:17:24.993 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 21:17:24.993778 | orchestrator | 21:17:24.993 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 21:17:24.993856 | orchestrator | 21:17:24.993 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 21:17:24.993869 | orchestrator | 21:17:24.993 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.993874 | orchestrator | 21:17:24.993 STDOUT terraform:  } 2025-09-27 21:17:24.993915 | orchestrator | 21:17:24.993 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-27 21:17:24.993994 | orchestrator | 21:17:24.993 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-27 21:17:24.994000 | orchestrator | 21:17:24.993 STDOUT terraform:  + direction = "ingress" 2025-09-27 21:17:24.994009 | orchestrator | 21:17:24.993 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 21:17:24.994145 | orchestrator | 21:17:24.993 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.994175 | orchestrator | 21:17:24.994 STDOUT terraform:  + protocol = "tcp" 2025-09-27 21:17:24.994189 | orchestrator | 21:17:24.994 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.994193 | orchestrator | 21:17:24.994 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 21:17:24.994197 | orchestrator | 21:17:24.994 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 21:17:24.994201 | orchestrator | 21:17:24.994 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-27 21:17:24.994216 | orchestrator | 21:17:24.994 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-09-27 21:17:24.994242 | orchestrator | 21:17:24.994 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.994256 | orchestrator | 21:17:24.994 STDOUT terraform:  } 2025-09-27 21:17:24.994318 | orchestrator | 21:17:24.994 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-27 21:17:24.994359 | orchestrator | 21:17:24.994 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-27 21:17:24.994382 | orchestrator | 21:17:24.994 STDOUT terraform:  + direction = "ingress" 2025-09-27 21:17:24.994407 | orchestrator | 21:17:24.994 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 21:17:24.994439 | orchestrator | 21:17:24.994 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.994479 | orchestrator | 21:17:24.994 STDOUT terraform:  + protocol = "udp" 2025-09-27 21:17:24.994511 | orchestrator | 21:17:24.994 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.994553 | orchestrator | 21:17:24.994 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 21:17:24.994557 | orchestrator | 21:17:24.994 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 21:17:24.994601 | orchestrator | 21:17:24.994 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-27 21:17:24.994637 | orchestrator | 21:17:24.994 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 21:17:24.994691 | orchestrator | 21:17:24.994 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.994696 | orchestrator | 21:17:24.994 STDOUT terraform:  } 2025-09-27 21:17:24.994721 | orchestrator | 21:17:24.994 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-27 21:17:24.994791 | orchestrator | 21:17:24.994 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_management_rule5" { 2025-09-27 21:17:24.994798 | orchestrator | 21:17:24.994 STDOUT terraform:  + direction = "ingress" 2025-09-27 21:17:24.994819 | orchestrator | 21:17:24.994 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 21:17:24.994853 | orchestrator | 21:17:24.994 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.994910 | orchestrator | 21:17:24.994 STDOUT terraform:  + protocol = "icmp" 2025-09-27 21:17:24.994943 | orchestrator | 21:17:24.994 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.994985 | orchestrator | 21:17:24.994 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 21:17:24.995015 | orchestrator | 21:17:24.994 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 21:17:24.995041 | orchestrator | 21:17:24.994 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 21:17:24.995064 | orchestrator | 21:17:24.994 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 21:17:24.995068 | orchestrator | 21:17:24.995 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.995072 | orchestrator | 21:17:24.995 STDOUT terraform:  } 2025-09-27 21:17:24.995116 | orchestrator | 21:17:24.995 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-27 21:17:24.995184 | orchestrator | 21:17:24.995 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-27 21:17:24.995191 | orchestrator | 21:17:24.995 STDOUT terraform:  + direction = "ingress" 2025-09-27 21:17:24.995229 | orchestrator | 21:17:24.995 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 21:17:24.995282 | orchestrator | 21:17:24.995 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.995327 | orchestrator | 21:17:24.995 STDOUT terraform:  + protocol = "tcp" 2025-09-27 21:17:24.995386 | orchestrator | 21:17:24.995 STDOUT terraform:  + region = (known 
after apply) 2025-09-27 21:17:24.995391 | orchestrator | 21:17:24.995 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 21:17:24.995394 | orchestrator | 21:17:24.995 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 21:17:24.995406 | orchestrator | 21:17:24.995 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 21:17:24.995411 | orchestrator | 21:17:24.995 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 21:17:24.995450 | orchestrator | 21:17:24.995 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.995456 | orchestrator | 21:17:24.995 STDOUT terraform:  } 2025-09-27 21:17:24.995523 | orchestrator | 21:17:24.995 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-27 21:17:24.996853 | orchestrator | 21:17:24.995 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-27 21:17:24.996864 | orchestrator | 21:17:24.995 STDOUT terraform:  + direction = "ingress" 2025-09-27 21:17:24.996880 | orchestrator | 21:17:24.995 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 21:17:24.996885 | orchestrator | 21:17:24.995 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.996889 | orchestrator | 21:17:24.995 STDOUT terraform:  + protocol = "udp" 2025-09-27 21:17:24.996893 | orchestrator | 21:17:24.995 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.996909 | orchestrator | 21:17:24.995 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 21:17:24.996920 | orchestrator | 21:17:24.995 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 21:17:24.996932 | orchestrator | 21:17:24.995 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 21:17:24.996990 | orchestrator | 21:17:24.995 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 21:17:24.996995 | orchestrator | 
21:17:24.995 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.996999 | orchestrator | 21:17:24.995 STDOUT terraform:  } 2025-09-27 21:17:24.997003 | orchestrator | 21:17:24.995 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-27 21:17:24.997007 | orchestrator | 21:17:24.995 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-27 21:17:24.997011 | orchestrator | 21:17:24.995 STDOUT terraform:  + direction = "ingress" 2025-09-27 21:17:24.997015 | orchestrator | 21:17:24.995 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 21:17:24.997019 | orchestrator | 21:17:24.995 STDOUT terraform:  + id = (known after apply) 2025-09-27 21:17:24.997023 | orchestrator | 21:17:24.996 STDOUT terraform:  + protocol = "icmp" 2025-09-27 21:17:24.997026 | orchestrator | 21:17:24.996 STDOUT terraform:  + region = (known after apply) 2025-09-27 21:17:24.997030 | orchestrator | 21:17:24.996 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 21:17:24.997034 | orchestrator | 21:17:24.996 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 21:17:24.997072 | orchestrator | 21:17:24.996 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 21:17:24.997076 | orchestrator | 21:17:24.996 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 21:17:24.997080 | orchestrator | 21:17:24.996 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 21:17:24.997084 | orchestrator | 21:17:24.996 STDOUT terraform:  } 2025-09-27 21:17:24.997095 | orchestrator | 21:17:24.996 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-27 21:17:24.997126 | orchestrator | 21:17:24.996 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-27 21:17:24.997139 | orchestrator | 21:17:24.996 STDOUT 
terraform:  + description = "vrrp"
2025-09-27 21:17:24.997143 | orchestrator | 21:17:24.996 STDOUT terraform:  + direction = "ingress"
2025-09-27 21:17:24.997147 | orchestrator | 21:17:24.996 STDOUT terraform:  + ethertype = "IPv4"
2025-09-27 21:17:24.997150 | orchestrator | 21:17:24.996 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.997154 | orchestrator | 21:17:24.996 STDOUT terraform:  + protocol = "112"
2025-09-27 21:17:24.997169 | orchestrator | 21:17:24.996 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.997181 | orchestrator | 21:17:24.996 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-27 21:17:24.997185 | orchestrator | 21:17:24.996 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-27 21:17:24.997247 | orchestrator | 21:17:24.996 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-27 21:17:24.997338 | orchestrator | 21:17:24.996 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-27 21:17:24.997343 | orchestrator | 21:17:24.996 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-27 21:17:24.997391 | orchestrator | 21:17:24.996 STDOUT terraform:  }
2025-09-27 21:17:24.997409 | orchestrator | 21:17:24.996 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-27 21:17:24.997422 | orchestrator | 21:17:24.996 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-27 21:17:24.997426 | orchestrator | 21:17:24.996 STDOUT terraform:  + all_tags = (known after apply)
2025-09-27 21:17:24.997430 | orchestrator | 21:17:24.996 STDOUT terraform:  + description = "management security group"
2025-09-27 21:17:24.997459 | orchestrator | 21:17:24.996 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.997488 | orchestrator | 21:17:24.996 STDOUT terraform:  + name = "testbed-management"
2025-09-27 21:17:24.997499 | orchestrator | 21:17:24.996 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.997596 | orchestrator | 21:17:24.996 STDOUT terraform:  + stateful = (known after apply)
2025-09-27 21:17:24.997601 | orchestrator | 21:17:24.996 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-27 21:17:24.997630 | orchestrator | 21:17:24.996 STDOUT terraform:  }
2025-09-27 21:17:24.997635 | orchestrator | 21:17:24.996 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-27 21:17:24.997656 | orchestrator | 21:17:24.996 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-27 21:17:24.997679 | orchestrator | 21:17:24.996 STDOUT terraform:  + all_tags = (known after apply)
2025-09-27 21:17:24.997683 | orchestrator | 21:17:24.996 STDOUT terraform:  + description = "node security group"
2025-09-27 21:17:24.997687 | orchestrator | 21:17:24.997 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:24.997691 | orchestrator | 21:17:24.997 STDOUT terraform:  + name = "testbed-node"
2025-09-27 21:17:24.997695 | orchestrator | 21:17:24.997 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:24.997705 | orchestrator | 21:17:24.997 STDOUT terraform:  + stateful = (known after apply)
2025-09-27 21:17:24.997709 | orchestrator | 21:17:24.997 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-27 21:17:24.997721 | orchestrator | 21:17:24.997 STDOUT terraform:  }
2025-09-27 21:17:24.997725 | orchestrator | 21:17:24.997 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-27 21:17:24.997729 | orchestrator | 21:17:24.997 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-27 21:17:24.997733 | orchestrator | 21:17:24.997 STDOUT terraform:  + all_tags = (known after apply)
2025-09-27 21:17:24.997793 | orchestrator | 21:17:24.997 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-27 21:17:24.997846 | orchestrator | 21:17:24.997 STDOUT terraform:  + dns_nameservers = [
2025-09-27 21:17:24.997855 | orchestrator | 21:17:24.997 STDOUT terraform:  + "8.8.8.8",
2025-09-27 21:17:24.997887 | orchestrator | 21:17:24.997 STDOUT terraform:  + "9.9.9.9",
2025-09-27 21:17:24.997919 | orchestrator | 21:17:24.997 STDOUT terraform:  ]
2025-09-27 21:17:24.997930 | orchestrator | 21:17:24.997 STDOUT terraform:  + enable_dhcp = true
2025-09-27 21:17:24.997960 | orchestrator | 21:17:24.997 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-27 21:17:24.998073 | orchestrator | 21:17:24.997 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:25.005122 | orchestrator | 21:17:24.997 STDOUT terraform:  + ip_version = 4
2025-09-27 21:17:25.005134 | orchestrator | 21:17:24.997 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-27 21:17:25.005139 | orchestrator | 21:17:24.997 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-27 21:17:25.005143 | orchestrator | 21:17:24.997 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-27 21:17:25.005147 | orchestrator | 21:17:24.997 STDOUT terraform:  + network_id = (known after apply)
2025-09-27 21:17:25.005151 | orchestrator | 21:17:24.997 STDOUT terraform:  + no_gateway = false
2025-09-27 21:17:25.005155 | orchestrator | 21:17:24.997 STDOUT terraform:  + region = (known after apply)
2025-09-27 21:17:25.005186 | orchestrator | 21:17:24.997 STDOUT terraform:  + service_types = (known after apply)
2025-09-27 21:17:25.005190 | orchestrator | 21:17:24.997 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-27 21:17:25.005194 | orchestrator | 21:17:24.997 STDOUT terraform:  + allocation_pool {
2025-09-27 21:17:25.005198 | orchestrator | 21:17:24.997 STDOUT terraform:  + end = "192.168.31.250"
2025-09-27 21:17:25.005204 | orchestrator | 21:17:24.997 STDOUT terraform:  + start = "192.168.31.200"
2025-09-27 21:17:25.005208 | orchestrator | 21:17:24.997 STDOUT terraform:  }
2025-09-27 21:17:25.005212 | orchestrator | 21:17:24.997 STDOUT terraform:  }
2025-09-27 21:17:25.005216 | orchestrator | 21:17:24.997 STDOUT terraform:  # terraform_data.image will be created
2025-09-27 21:17:25.005220 | orchestrator | 21:17:24.997 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-27 21:17:25.005224 | orchestrator | 21:17:24.997 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:25.005228 | orchestrator | 21:17:24.997 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-27 21:17:25.005232 | orchestrator | 21:17:24.997 STDOUT terraform:  + output = (known after apply)
2025-09-27 21:17:25.005235 | orchestrator | 21:17:24.997 STDOUT terraform:  }
2025-09-27 21:17:25.005239 | orchestrator | 21:17:24.997 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-27 21:17:25.005251 | orchestrator | 21:17:24.997 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-27 21:17:25.005255 | orchestrator | 21:17:24.997 STDOUT terraform:  + id = (known after apply)
2025-09-27 21:17:25.005259 | orchestrator | 21:17:24.997 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-27 21:17:25.005263 | orchestrator | 21:17:24.997 STDOUT terraform:  + output = (known after apply)
2025-09-27 21:17:25.005267 | orchestrator | 21:17:24.997 STDOUT terraform:  }
2025-09-27 21:17:25.005271 | orchestrator | 21:17:24.997 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-27 21:17:25.005283 | orchestrator | 21:17:24.997 STDOUT terraform: Changes to Outputs:
2025-09-27 21:17:25.005287 | orchestrator | 21:17:24.997 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-27 21:17:25.005291 | orchestrator | 21:17:24.997 STDOUT terraform:  + private_key = (sensitive value)
2025-09-27 21:17:25.208852 | orchestrator | 21:17:25.208 STDOUT terraform: terraform_data.image: Creating...
2025-09-27 21:17:25.208914 | orchestrator | 21:17:25.208 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=cc6422bc-7e4f-a82e-03e1-2ecee22802f3]
2025-09-27 21:17:25.208921 | orchestrator | 21:17:25.208 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-27 21:17:25.208927 | orchestrator | 21:17:25.208 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=c71da7b7-3c2f-3e0b-7754-dfd8cd21861f]
2025-09-27 21:17:25.240211 | orchestrator | 21:17:25.240 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-27 21:17:25.266302 | orchestrator | 21:17:25.266 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-27 21:17:25.266372 | orchestrator | 21:17:25.266 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-27 21:17:25.266379 | orchestrator | 21:17:25.266 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-27 21:17:25.266402 | orchestrator | 21:17:25.266 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-27 21:17:25.266460 | orchestrator | 21:17:25.266 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-27 21:17:25.268982 | orchestrator | 21:17:25.268 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-27 21:17:25.269507 | orchestrator | 21:17:25.269 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-27 21:17:25.273649 | orchestrator | 21:17:25.273 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-27 21:17:25.274312 | orchestrator | 21:17:25.274 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-27 21:17:25.718454 | orchestrator | 21:17:25.718 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-27 21:17:25.720172 | orchestrator | 21:17:25.719 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-27 21:17:25.724302 | orchestrator | 21:17:25.724 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-27 21:17:25.725276 | orchestrator | 21:17:25.725 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-27 21:17:25.747999 | orchestrator | 21:17:25.747 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-27 21:17:25.752265 | orchestrator | 21:17:25.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-27 21:17:26.512060 | orchestrator | 21:17:26.511 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 2s [id=86e35027-87e8-4e53-b357-4b4121766c5c]
2025-09-27 21:17:26.520802 | orchestrator | 21:17:26.520 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-27 21:17:28.882311 | orchestrator | 21:17:28.881 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=270d9e8b-cef6-4542-9e07-9deadafed901]
2025-09-27 21:17:28.889055 | orchestrator | 21:17:28.888 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-27 21:17:28.900938 | orchestrator | 21:17:28.900 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=d6e45664-99ef-4d09-8a38-5c0568f04129]
2025-09-27 21:17:28.910049 | orchestrator | 21:17:28.909 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-27 21:17:28.915289 | orchestrator | 21:17:28.915 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=4d9ab6b1581d38f240c83f055358cd0c87c7c36e]
2025-09-27 21:17:28.918152 | orchestrator | 21:17:28.918 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-27 21:17:28.931083 | orchestrator | 21:17:28.929 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=f54ee983-9faf-4784-aff9-7d79079ed7ae]
2025-09-27 21:17:28.933721 | orchestrator | 21:17:28.933 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-27 21:17:28.939592 | orchestrator | 21:17:28.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=c7c2c329-81fb-49e1-8405-12e2c9115bb9]
2025-09-27 21:17:28.942909 | orchestrator | 21:17:28.942 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-27 21:17:28.952079 | orchestrator | 21:17:28.951 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=347ca9a0-83dc-4ac7-930f-213626cd3e96]
2025-09-27 21:17:28.963541 | orchestrator | 21:17:28.963 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-27 21:17:28.964228 | orchestrator | 21:17:28.964 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=5c98ed57-cbba-4a71-94c9-227184fafc60]
2025-09-27 21:17:28.966380 | orchestrator | 21:17:28.966 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=b50b5a646634740b327587371b519ab40be0753d]
2025-09-27 21:17:28.969448 | orchestrator | 21:17:28.969 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-27 21:17:28.974553 | orchestrator | 21:17:28.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-27 21:17:29.017964 | orchestrator | 21:17:29.017 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=02398e45-2b37-4a9b-beeb-c269fa72e24d]
2025-09-27 21:17:29.021499 | orchestrator | 21:17:29.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=6ce21c34-3cf8-4892-a084-795bd672264f]
2025-09-27 21:17:29.029732 | orchestrator | 21:17:29.029 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-27 21:17:29.048646 | orchestrator | 21:17:29.048 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=c35b6dae-9fd6-477e-b9cb-11e140c89f55]
2025-09-27 21:17:29.889301 | orchestrator | 21:17:29.888 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=7fa510ed-1d17-42a8-990d-dfa339dcbfb4]
2025-09-27 21:17:30.209650 | orchestrator | 21:17:30.209 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=bfb90ef7-9101-4862-ae44-1b06cd607218]
2025-09-27 21:17:30.218583 | orchestrator | 21:17:30.218 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-27 21:17:32.311106 | orchestrator | 21:17:32.310 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=2725e2ee-fa25-4636-a6d9-d82ade82b782]
2025-09-27 21:17:32.314126 | orchestrator | 21:17:32.313 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=27aad776-148c-4565-8829-34bf45547489]
2025-09-27 21:17:32.395435 | orchestrator | 21:17:32.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=ee3e927d-3b64-40df-8c8e-1bd9928ca124]
2025-09-27 21:17:32.398062 | orchestrator | 21:17:32.397 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=2556ace2-5a48-42f4-80f3-8864b24f8ba9]
2025-09-27 21:17:32.410548 | orchestrator | 21:17:32.410 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=15263243-d7d0-418e-bcde-dca37b998187]
2025-09-27 21:17:32.416624 | orchestrator | 21:17:32.416 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43]
2025-09-27 21:17:32.861962 | orchestrator | 21:17:32.861 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=7382b612-df7b-4f67-9e62-4213976c2417]
2025-09-27 21:17:32.868714 | orchestrator | 21:17:32.868 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-27 21:17:32.870209 | orchestrator | 21:17:32.869 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-27 21:17:32.871047 | orchestrator | 21:17:32.870 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-27 21:17:33.044144 | orchestrator | 21:17:33.043 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9a9f673e-f748-444d-b627-b2f13a7e2a68]
2025-09-27 21:17:33.057579 | orchestrator | 21:17:33.057 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-27 21:17:33.061482 | orchestrator | 21:17:33.061 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-27 21:17:33.068239 | orchestrator | 21:17:33.068 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-27 21:17:33.070776 | orchestrator | 21:17:33.070 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-27 21:17:33.070939 | orchestrator | 21:17:33.070 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-27 21:17:33.071268 | orchestrator | 21:17:33.071 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-27 21:17:33.071922 | orchestrator | 21:17:33.071 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-27 21:17:33.075747 | orchestrator | 21:17:33.075 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-27 21:17:33.098321 | orchestrator | 21:17:33.097 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=99df8d7f-c0ed-4dfb-8a82-e2fd136b6762]
2025-09-27 21:17:33.114528 | orchestrator | 21:17:33.114 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-27 21:17:33.244010 | orchestrator | 21:17:33.243 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=3aaba5a7-2d6f-452d-96dd-18a73a3b8cfd]
2025-09-27 21:17:33.257212 | orchestrator | 21:17:33.257 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-27 21:17:33.561483 | orchestrator | 21:17:33.561 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=875668a9-c980-4751-9bd0-234e5edcbc06]
2025-09-27 21:17:33.571987 | orchestrator | 21:17:33.571 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-27 21:17:33.703121 | orchestrator | 21:17:33.702 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=87233fd6-6ebe-4ce0-b580-d0dc8f326d47]
2025-09-27 21:17:33.710656 | orchestrator | 21:17:33.710 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-27 21:17:33.748186 | orchestrator | 21:17:33.747 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=6c5ba333-db1a-4b8f-85a7-1d5c756ff0fa]
2025-09-27 21:17:33.755281 | orchestrator | 21:17:33.755 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-27 21:17:33.770613 | orchestrator | 21:17:33.770 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=b3c59a4d-4bdb-46cb-bad2-a9f8c1d49c78]
2025-09-27 21:17:33.772896 | orchestrator | 21:17:33.772 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=49460738-7bff-4ab3-8966-ee4af40fe1d0]
2025-09-27 21:17:33.781386 | orchestrator | 21:17:33.781 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-27 21:17:33.782164 | orchestrator | 21:17:33.782 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-27 21:17:33.795379 | orchestrator | 21:17:33.795 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=9583c250-012b-49fb-9bac-e5452a35290a]
2025-09-27 21:17:33.796954 | orchestrator | 21:17:33.796 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ab13fab9-1204-46b2-8a83-377e0bfea1a6]
2025-09-27 21:17:33.800897 | orchestrator | 21:17:33.800 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=6ef6444e-093b-4b84-80e6-720b42df39d6]
2025-09-27 21:17:33.803262 | orchestrator | 21:17:33.803 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-27 21:17:33.936831 | orchestrator | 21:17:33.936 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=29f413c9-8440-47f3-b5d9-888c109ea529]
2025-09-27 21:17:34.046327 | orchestrator | 21:17:34.045 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=3fb5904e-5e8f-411b-af54-b7967ef9fc91]
2025-09-27 21:17:34.336242 | orchestrator | 21:17:34.335 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=3ab11452-e44e-4d87-ac34-219edf05d9a3]
2025-09-27 21:17:34.360062 | orchestrator | 21:17:34.359 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=1c187af4-283c-4242-8d94-311a0dc6a805]
2025-09-27 21:17:34.541941 | orchestrator | 21:17:34.541 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=6a3b69b1-bb86-4930-91b9-c5860982579d]
2025-09-27 21:17:35.315766 | orchestrator | 21:17:35.315 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=45d70bef-2579-4baf-b866-efb22ec1a7f5]
2025-09-27 21:17:35.427575 | orchestrator | 21:17:35.427 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=fbe12fd9-bfc6-4aa9-bbf9-06c8adaec14d]
2025-09-27 21:17:35.451812 | orchestrator | 21:17:35.451 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-27 21:17:35.461211 | orchestrator | 21:17:35.461 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-27 21:17:35.463855 | orchestrator | 21:17:35.463 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-27 21:17:35.471621 | orchestrator | 21:17:35.471 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-27 21:17:35.472528 | orchestrator | 21:17:35.472 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-27 21:17:35.472553 | orchestrator | 21:17:35.472 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-27 21:17:35.485988 | orchestrator | 21:17:35.485 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-27 21:17:35.541627 | orchestrator | 21:17:35.541 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=d7b4401c-763c-4675-a307-bd3c64c66bdb]
2025-09-27 21:17:37.133513 | orchestrator | 21:17:37.133 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=d1bcf976-9ba0-40b8-ac07-7bfce35a7ce3]
2025-09-27 21:17:37.144317 | orchestrator | 21:17:37.144 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-27 21:17:37.153736 | orchestrator | 21:17:37.153 STDOUT terraform: local_file.inventory: Creating...
2025-09-27 21:17:37.153944 | orchestrator | 21:17:37.153 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-27 21:17:37.158170 | orchestrator | 21:17:37.157 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=47d65824ede72d91407adddca051b721e8e54327]
2025-09-27 21:17:37.158597 | orchestrator | 21:17:37.158 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=5f88db24b32b4fdc04553cdd217a5ae095a9deb5]
2025-09-27 21:17:37.929215 | orchestrator | 21:17:37.928 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=d1bcf976-9ba0-40b8-ac07-7bfce35a7ce3]
2025-09-27 21:17:45.464056 | orchestrator | 21:17:45.463 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-27 21:17:45.465873 | orchestrator | 21:17:45.465 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-27 21:17:45.476093 | orchestrator | 21:17:45.475 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-27 21:17:45.476167 | orchestrator | 21:17:45.476 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-27 21:17:45.478212 | orchestrator | 21:17:45.478 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-27 21:17:45.487567 | orchestrator | 21:17:45.487 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-27 21:17:55.466379 | orchestrator | 21:17:55.466 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-27 21:17:55.466519 | orchestrator | 21:17:55.466 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-27 21:17:55.476900 | orchestrator | 21:17:55.476 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-27 21:17:55.477113 | orchestrator | 21:17:55.476 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-27 21:17:55.479111 | orchestrator | 21:17:55.478 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-27 21:17:55.488371 | orchestrator | 21:17:55.488 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-27 21:17:56.033158 | orchestrator | 21:17:56.032 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=1625b11d-3544-49da-82fa-acbe89bc4080]
2025-09-27 21:17:56.123189 | orchestrator | 21:17:56.122 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=aa5402ed-cf69-4217-b268-10b24d6036d0]
2025-09-27 21:17:56.152276 | orchestrator | 21:17:56.151 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=023a7278-e529-4fca-9f64-edaa1e656469]
2025-09-27 21:18:05.468504 | orchestrator | 21:18:05.468 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-27 21:18:05.478126 | orchestrator | 21:18:05.477 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-27 21:18:05.479207 | orchestrator | 21:18:05.478 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-09-27 21:18:06.225419 | orchestrator | 21:18:06.225 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=b464ccee-51d1-4161-ab8a-159bf5503c74]
2025-09-27 21:18:06.326387 | orchestrator | 21:18:06.325 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=2ea8b4ad-c7ea-47ce-9946-58e054aeea0d]
2025-09-27 21:18:06.729899 | orchestrator | 21:18:06.729 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 32s [id=c5b76e1b-1426-4466-b04e-bc174b482db6]
2025-09-27 21:18:06.749060 | orchestrator | 21:18:06.748 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-27 21:18:06.757202 | orchestrator | 21:18:06.756 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8580806959714865492]
2025-09-27 21:18:06.761648 | orchestrator | 21:18:06.761 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-27 21:18:06.763900 | orchestrator | 21:18:06.763 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-27 21:18:06.772856 | orchestrator | 21:18:06.772 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-27 21:18:06.780762 | orchestrator | 21:18:06.780 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-27 21:18:06.783597 | orchestrator | 21:18:06.783 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-27 21:18:06.786163 | orchestrator | 21:18:06.785 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-27 21:18:06.790860 | orchestrator | 21:18:06.790 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-27 21:18:06.794189 | orchestrator | 21:18:06.792 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-27 21:18:06.798985 | orchestrator | 21:18:06.798 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-27 21:18:06.817902 | orchestrator | 21:18:06.817 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-27 21:18:10.153403 | orchestrator | 21:18:10.152 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=2ea8b4ad-c7ea-47ce-9946-58e054aeea0d/5c98ed57-cbba-4a71-94c9-227184fafc60]
2025-09-27 21:18:10.172681 | orchestrator | 21:18:10.172 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=b464ccee-51d1-4161-ab8a-159bf5503c74/6ce21c34-3cf8-4892-a084-795bd672264f]
2025-09-27 21:18:10.200051 | orchestrator | 21:18:10.199 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=c5b76e1b-1426-4466-b04e-bc174b482db6/c7c2c329-81fb-49e1-8405-12e2c9115bb9]
2025-09-27 21:18:10.381244 | orchestrator | 21:18:10.380 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=2ea8b4ad-c7ea-47ce-9946-58e054aeea0d/270d9e8b-cef6-4542-9e07-9deadafed901]
2025-09-27 21:18:10.383136 | orchestrator | 21:18:10.382 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=b464ccee-51d1-4161-ab8a-159bf5503c74/347ca9a0-83dc-4ac7-930f-213626cd3e96]
2025-09-27 21:18:10.406622 | orchestrator | 21:18:10.406 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=c5b76e1b-1426-4466-b04e-bc174b482db6/02398e45-2b37-4a9b-beeb-c269fa72e24d]
2025-09-27 21:18:16.484644 | orchestrator | 21:18:16.484 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=2ea8b4ad-c7ea-47ce-9946-58e054aeea0d/f54ee983-9faf-4784-aff9-7d79079ed7ae]
2025-09-27 21:18:16.495391 | orchestrator | 21:18:16.495 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=c5b76e1b-1426-4466-b04e-bc174b482db6/d6e45664-99ef-4d09-8a38-5c0568f04129]
2025-09-27 21:18:16.519035 | orchestrator | 21:18:16.518 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=b464ccee-51d1-4161-ab8a-159bf5503c74/c35b6dae-9fd6-477e-b9cb-11e140c89f55]
2025-09-27 21:18:16.819098 | orchestrator | 21:18:16.818 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-27 21:18:26.819507 | orchestrator | 21:18:26.819 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-27 21:18:27.193715 | orchestrator | 21:18:27.193 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=761d015f-30cd-458d-968c-3769e0c75227]
2025-09-27 21:18:27.210730 | orchestrator | 21:18:27.210 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-27 21:18:27.210817 | orchestrator | 21:18:27.210 STDOUT terraform: Outputs:
2025-09-27 21:18:27.210828 | orchestrator | 21:18:27.210 STDOUT terraform: manager_address =
2025-09-27 21:18:27.210837 | orchestrator | 21:18:27.210 STDOUT terraform: private_key =
2025-09-27 21:18:27.624586 | orchestrator | ok: Runtime: 0:01:08.177706
2025-09-27 21:18:27.661412 |
2025-09-27 21:18:27.661594 | TASK [Create infrastructure (stable)]
2025-09-27 21:18:28.195317 | orchestrator | skipping: Conditional result was False
2025-09-27 21:18:28.218231 |
2025-09-27 21:18:28.218409 | TASK [Fetch manager address]
2025-09-27 21:18:28.683645 | orchestrator | ok
2025-09-27 21:18:28.693491 |
2025-09-27 21:18:28.693625 | TASK [Set manager_host address]
2025-09-27 21:18:28.772495 | orchestrator | ok
2025-09-27 21:18:28.781680 |
2025-09-27 21:18:28.781796 | LOOP [Update ansible collections]
2025-09-27 21:18:29.681332 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-27 21:18:29.681704 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-27 21:18:29.681764 | orchestrator | Starting galaxy collection install process
2025-09-27 21:18:29.681804 | orchestrator | Process install dependency map
2025-09-27 21:18:29.681839 | orchestrator | Starting collection install process
2025-09-27 21:18:29.681873 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2025-09-27 21:18:29.681911 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2025-09-27 21:18:29.681951 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-27 21:18:29.682021 | orchestrator | ok: Item: commons Runtime: 0:00:00.559092
2025-09-27 21:18:30.639683 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-27 21:18:30.639870 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-27 21:18:30.639938 | orchestrator | Starting galaxy collection install process 2025-09-27 21:18:30.639989 | orchestrator | Process install dependency map 2025-09-27 21:18:30.640036 | orchestrator | Starting collection install process 2025-09-27 21:18:30.640079 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2025-09-27 21:18:30.640121 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2025-09-27 21:18:30.640160 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-27 21:18:30.640221 | orchestrator | ok: Item: services Runtime: 0:00:00.670836 2025-09-27 21:18:30.663434 | 2025-09-27 21:18:30.663608 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-27 21:18:41.201396 | orchestrator | ok 2025-09-27 21:18:41.213457 | 2025-09-27 21:18:41.213641 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-27 21:19:41.263694 | orchestrator | ok 2025-09-27 21:19:41.273524 | 2025-09-27 21:19:41.273662 | TASK [Fetch manager ssh hostkey] 2025-09-27 21:19:42.848620 | orchestrator | Output suppressed because no_log was given 2025-09-27 21:19:42.864179 | 2025-09-27 21:19:42.864338 | TASK [Get ssh keypair from terraform environment] 2025-09-27 21:19:43.399701 | orchestrator | ok: Runtime: 0:00:00.005590 2025-09-27 21:19:43.416776 | 2025-09-27 21:19:43.416936 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-27 21:19:43.450022 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
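The task "Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"" is Ansible's `wait_for` with a `search_regex` against the SSH banner. A rough shell analogue, with the banner match factored out so it can be exercised without a live host (`wait_for_ssh` and its host argument are hypothetical; it is not called here):

```shell
# Match logic on its own: first line of the SSH banner on stdin.
banner_matches() {  # usage: ... | banner_matches REGEX
  head -n1 | grep -q "$1"
}

# Rough analogue of the wait_for task: poll until the banner matches
# or the timeout expires. /dev/tcp is a bash feature, hence bash -c.
wait_for_ssh() {  # usage: wait_for_ssh HOST [TIMEOUT_SECONDS]
  host=$1
  deadline=$(( $(date +%s) + ${2:-300} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if banner=$(timeout 5 bash -c "head -n1 < /dev/tcp/$host/22" 2>/dev/null); then
      printf '%s\n' "$banner" | banner_matches OpenSSH && return 0
    fi
    sleep 5
  done
  return 1
}

printf 'SSH-2.0-OpenSSH_9.6p1 Ubuntu\n' | banner_matches OpenSSH && echo "banner ok"
```

Matching on the banner rather than the bare port matters here: the port can accept connections while sshd is still being reconfigured by cloud-init, which is also why the playbook then "waits a little longer" before continuing.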
2025-09-27 21:19:43.459164 | 2025-09-27 21:19:43.459293 | TASK [Run manager part 0] 2025-09-27 21:19:44.393534 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-27 21:19:44.445954 | orchestrator | 2025-09-27 21:19:44.446011 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-27 21:19:44.446049 | orchestrator | 2025-09-27 21:19:44.446064 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-27 21:19:46.215958 | orchestrator | ok: [testbed-manager] 2025-09-27 21:19:46.216043 | orchestrator | 2025-09-27 21:19:46.216075 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-27 21:19:46.216091 | orchestrator | 2025-09-27 21:19:46.216106 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:19:48.126832 | orchestrator | ok: [testbed-manager] 2025-09-27 21:19:48.126881 | orchestrator | 2025-09-27 21:19:48.126890 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-27 21:19:48.821336 | orchestrator | ok: [testbed-manager] 2025-09-27 21:19:48.821391 | orchestrator | 2025-09-27 21:19:48.821399 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-27 21:19:48.875970 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:19:48.876023 | orchestrator | 2025-09-27 21:19:48.876033 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-27 21:19:48.910231 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:19:48.910282 | orchestrator | 2025-09-27 21:19:48.910289 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-27 21:19:48.938085 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:19:48.938136 | 
orchestrator | 2025-09-27 21:19:48.938144 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-27 21:19:48.963312 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:19:48.963343 | orchestrator | 2025-09-27 21:19:48.963348 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-27 21:19:48.989745 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:19:48.989772 | orchestrator | 2025-09-27 21:19:48.989777 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-27 21:19:49.017973 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:19:49.018044 | orchestrator | 2025-09-27 21:19:49.018052 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-27 21:19:49.045101 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:19:49.045140 | orchestrator | 2025-09-27 21:19:49.045146 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-27 21:19:49.781560 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:49.781667 | orchestrator | 2025-09-27 21:19:49.781675 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-27 21:22:31.335792 | orchestrator | changed: [testbed-manager] 2025-09-27 21:22:31.335875 | orchestrator | 2025-09-27 21:22:31.335892 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-27 21:24:09.941947 | orchestrator | changed: [testbed-manager] 2025-09-27 21:24:09.941996 | orchestrator | 2025-09-27 21:24:09.942004 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-27 21:24:33.946647 | orchestrator | changed: [testbed-manager] 2025-09-27 21:24:33.946716 | orchestrator | 2025-09-27 21:24:33.946726 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-27 21:24:43.071431 | orchestrator | changed: [testbed-manager] 2025-09-27 21:24:43.071523 | orchestrator | 2025-09-27 21:24:43.071540 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-27 21:24:43.112793 | orchestrator | ok: [testbed-manager] 2025-09-27 21:24:43.112849 | orchestrator | 2025-09-27 21:24:43.112863 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-27 21:24:43.892401 | orchestrator | ok: [testbed-manager] 2025-09-27 21:24:43.892480 | orchestrator | 2025-09-27 21:24:43.892498 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-27 21:24:44.631494 | orchestrator | changed: [testbed-manager] 2025-09-27 21:24:44.632410 | orchestrator | 2025-09-27 21:24:44.632435 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-27 21:24:50.768789 | orchestrator | changed: [testbed-manager] 2025-09-27 21:24:50.768900 | orchestrator | 2025-09-27 21:24:50.768962 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-27 21:24:56.565273 | orchestrator | changed: [testbed-manager] 2025-09-27 21:24:56.565491 | orchestrator | 2025-09-27 21:24:56.565508 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-27 21:24:59.226839 | orchestrator | changed: [testbed-manager] 2025-09-27 21:24:59.226885 | orchestrator | 2025-09-27 21:24:59.226895 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-27 21:25:01.020309 | orchestrator | changed: [testbed-manager] 2025-09-27 21:25:01.020351 | orchestrator | 2025-09-27 21:25:01.020356 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-27 
21:25:02.100449 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-27 21:25:02.100486 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-27 21:25:02.100491 | orchestrator | 2025-09-27 21:25:02.100496 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-27 21:25:02.141773 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-27 21:25:02.141819 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-27 21:25:02.141824 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-27 21:25:02.141830 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-27 21:25:05.388564 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-27 21:25:05.388603 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-27 21:25:05.388608 | orchestrator | 2025-09-27 21:25:05.388613 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-27 21:25:05.964063 | orchestrator | changed: [testbed-manager] 2025-09-27 21:25:05.964139 | orchestrator | 2025-09-27 21:25:05.964151 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-27 21:28:24.979356 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-27 21:28:24.979494 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-27 21:28:24.979511 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-27 21:28:24.979523 | orchestrator | 2025-09-27 21:28:24.979534 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-27 21:28:27.144171 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-27 21:28:27.144247 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-27 21:28:27.144262 | orchestrator | 2025-09-27 21:28:27.144275 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-27 21:28:27.144287 | orchestrator | 2025-09-27 21:28:27.144298 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:28:28.489687 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:28.489763 | orchestrator | 2025-09-27 21:28:28.489779 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-27 21:28:28.530579 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:28.530633 | orchestrator | 2025-09-27 21:28:28.530645 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-27 21:28:28.590524 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:28.590585 | orchestrator | 2025-09-27 21:28:28.590601 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-27 21:28:29.370510 | orchestrator | changed: [testbed-manager] 2025-09-27 21:28:29.370582 | orchestrator | 2025-09-27 21:28:29.370597 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-27 21:28:30.118280 | orchestrator | changed: [testbed-manager] 2025-09-27 21:28:30.118352 | orchestrator | 2025-09-27 21:28:30.118368 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-27 21:28:31.554870 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-27 21:28:31.554963 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-27 21:28:31.554978 | orchestrator | 2025-09-27 21:28:31.555005 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-27 21:28:32.935240 | orchestrator | changed: [testbed-manager] 2025-09-27 21:28:32.935295 | orchestrator | 2025-09-27 21:28:32.935303 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-27 21:28:34.719879 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-27 21:28:34.719964 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-27 21:28:34.719977 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-27 21:28:34.719989 | orchestrator | 2025-09-27 21:28:34.720002 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-27 21:28:34.774807 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:28:34.774904 | orchestrator | 2025-09-27 21:28:34.774924 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-27 21:28:35.352426 | orchestrator | changed: [testbed-manager] 2025-09-27 21:28:35.352513 | orchestrator | 2025-09-27 21:28:35.352531 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-27 21:28:35.419384 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:28:35.419461 | orchestrator | 2025-09-27 21:28:35.419481 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-27 21:28:36.313753 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:28:36.313903 | orchestrator | changed: [testbed-manager] 2025-09-27 21:28:36.313915 | orchestrator | 2025-09-27 21:28:36.313920 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-27 21:28:36.343275 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:28:36.343305 | orchestrator | 2025-09-27 21:28:36.343312 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-27 21:28:36.371959 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:28:36.372009 | orchestrator | 2025-09-27 21:28:36.372016 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-27 21:28:36.401697 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:28:36.401759 | orchestrator | 2025-09-27 21:28:36.401772 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-27 21:28:36.462147 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:28:36.462208 | orchestrator | 2025-09-27 21:28:36.462224 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-27 21:28:37.196431 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:37.196508 | orchestrator | 2025-09-27 21:28:37.196524 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-27 21:28:37.196537 | orchestrator | 2025-09-27 21:28:37.196548 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:28:38.661906 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:38.661945 | orchestrator | 2025-09-27 21:28:38.661951 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-27 21:28:39.557941 | orchestrator | changed: [testbed-manager] 2025-09-27 21:28:39.557980 | orchestrator | 2025-09-27 21:28:39.557986 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:28:39.557992 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-27 21:28:39.557997 | orchestrator | 2025-09-27 21:28:39.818456 | orchestrator | ok: Runtime: 0:08:55.897964 2025-09-27 21:28:39.836258 | 2025-09-27 21:28:39.836411 | TASK [Point 
out that logging in on the manager is now possible] 2025-09-27 21:28:39.874246 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-27 21:28:39.883698 | 2025-09-27 21:28:39.883817 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-27 21:28:39.918306 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-27 21:28:39.927057 | 2025-09-27 21:28:39.927175 | TASK [Run manager part 1 + 2] 2025-09-27 21:28:40.720462 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-27 21:28:40.775127 | orchestrator | 2025-09-27 21:28:40.775226 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-27 21:28:40.775242 | orchestrator | 2025-09-27 21:28:40.775271 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:28:43.755426 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:43.755485 | orchestrator | 2025-09-27 21:28:43.755510 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-27 21:28:43.794038 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:28:43.794092 | orchestrator | 2025-09-27 21:28:43.794102 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-27 21:28:43.829875 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:43.829931 | orchestrator | 2025-09-27 21:28:43.829940 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-27 21:28:43.864460 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:43.864546 | orchestrator | 2025-09-27 21:28:43.864562 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-09-27 21:28:43.933445 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:43.933503 | orchestrator | 2025-09-27 21:28:43.933510 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-27 21:28:43.997932 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:43.997982 | orchestrator | 2025-09-27 21:28:43.997989 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-27 21:28:44.036524 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-27 21:28:44.036570 | orchestrator | 2025-09-27 21:28:44.036576 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-27 21:28:44.717097 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:44.717150 | orchestrator | 2025-09-27 21:28:44.717159 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-27 21:28:44.757916 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:28:44.757963 | orchestrator | 2025-09-27 21:28:44.757969 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-27 21:28:46.040906 | orchestrator | changed: [testbed-manager] 2025-09-27 21:28:46.040968 | orchestrator | 2025-09-27 21:28:46.040980 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-27 21:28:46.587787 | orchestrator | ok: [testbed-manager] 2025-09-27 21:28:46.587986 | orchestrator | 2025-09-27 21:28:46.588004 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-27 21:28:47.627301 | orchestrator | changed: [testbed-manager] 2025-09-27 21:28:47.627371 | orchestrator | 2025-09-27 21:28:47.627387 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-27 21:29:05.518930 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:05.519026 | orchestrator | 2025-09-27 21:29:05.519042 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-27 21:29:06.189133 | orchestrator | ok: [testbed-manager] 2025-09-27 21:29:06.189209 | orchestrator | 2025-09-27 21:29:06.189226 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-27 21:29:06.238420 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:29:06.238478 | orchestrator | 2025-09-27 21:29:06.238492 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-27 21:29:07.180160 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:07.180237 | orchestrator | 2025-09-27 21:29:07.180254 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-27 21:29:08.171248 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:08.171294 | orchestrator | 2025-09-27 21:29:08.171303 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-27 21:29:08.754808 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:08.754851 | orchestrator | 2025-09-27 21:29:08.754860 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-27 21:29:08.793042 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-27 21:29:08.793171 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-27 21:29:08.793188 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-27 21:29:08.793200 | orchestrator | deprecation_warnings=False in ansible.cfg. 
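The `osism.commons.repository` tasks above remove `sources.list` and copy in a deb822-style `ubuntu.sources` file before refreshing the package cache. The real content is templated by the role; a minimal hand-written sketch of the format it produces (mirror URI, suites, and keyring path below are illustrative assumptions for Ubuntu 24.04, not taken from the role):

```shell
# Sketch of a minimal deb822 ubuntu.sources file, written to a scratch
# path instead of /etc/apt/sources.list.d/ubuntu.sources.
sources=$(mktemp)
cat > "$sources" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
grep -c '^Suites:' "$sources"
```

Unlike one-line `sources.list` entries, a single deb822 stanza covers several suites and pins the signing key explicitly, which is why the role can drop `sources.list` entirely after copying this file.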
2025-09-27 21:29:10.774560 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:10.774607 | orchestrator | 2025-09-27 21:29:10.774615 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-27 21:29:19.655640 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-27 21:29:19.655687 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-27 21:29:19.655697 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-27 21:29:19.655703 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-27 21:29:19.655713 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-27 21:29:19.655719 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-27 21:29:19.655725 | orchestrator | 2025-09-27 21:29:19.655731 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-27 21:29:20.700851 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:20.700891 | orchestrator | 2025-09-27 21:29:20.700899 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-27 21:29:20.744180 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:29:20.744222 | orchestrator | 2025-09-27 21:29:20.744231 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-27 21:29:24.075167 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:24.075255 | orchestrator | 2025-09-27 21:29:24.075269 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-27 21:29:24.115953 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:29:24.115994 | orchestrator | 2025-09-27 21:29:24.116001 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-27 21:31:02.938265 | orchestrator | changed: [testbed-manager] 2025-09-27 
21:31:02.938334 | orchestrator | 2025-09-27 21:31:02.938348 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-27 21:31:04.108431 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:04.108510 | orchestrator | 2025-09-27 21:31:04.108527 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:31:04.108540 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-27 21:31:04.108552 | orchestrator | 2025-09-27 21:31:04.542755 | orchestrator | ok: Runtime: 0:02:23.963975 2025-09-27 21:31:04.559373 | 2025-09-27 21:31:04.559560 | TASK [Reboot manager] 2025-09-27 21:31:06.094831 | orchestrator | ok: Runtime: 0:00:00.934106 2025-09-27 21:31:06.111332 | 2025-09-27 21:31:06.111482 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-27 21:31:22.540498 | orchestrator | ok 2025-09-27 21:31:22.550266 | 2025-09-27 21:31:22.550384 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-27 21:32:22.595962 | orchestrator | ok 2025-09-27 21:32:22.606311 | 2025-09-27 21:32:22.606435 | TASK [Deploy manager + bootstrap nodes] 2025-09-27 21:32:25.038423 | orchestrator | 2025-09-27 21:32:25.038653 | orchestrator | # DEPLOY MANAGER 2025-09-27 21:32:25.038680 | orchestrator | 2025-09-27 21:32:25.038696 | orchestrator | + set -e 2025-09-27 21:32:25.038709 | orchestrator | + echo 2025-09-27 21:32:25.038724 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-27 21:32:25.038741 | orchestrator | + echo 2025-09-27 21:32:25.038796 | orchestrator | + cat /opt/manager-vars.sh 2025-09-27 21:32:25.042116 | orchestrator | export NUMBER_OF_NODES=6 2025-09-27 21:32:25.042143 | orchestrator | 2025-09-27 21:32:25.042156 | orchestrator | export CEPH_VERSION=reef 2025-09-27 21:32:25.042169 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-27 21:32:25.042182 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-27 21:32:25.042203 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-27 21:32:25.042214 | orchestrator | 2025-09-27 21:32:25.042232 | orchestrator | export ARA=false 2025-09-27 21:32:25.042244 | orchestrator | export DEPLOY_MODE=manager 2025-09-27 21:32:25.042262 | orchestrator | export TEMPEST=false 2025-09-27 21:32:25.042273 | orchestrator | export IS_ZUUL=true 2025-09-27 21:32:25.042284 | orchestrator | 2025-09-27 21:32:25.042302 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-27 21:32:25.042314 | orchestrator | export EXTERNAL_API=false 2025-09-27 21:32:25.042325 | orchestrator | 2025-09-27 21:32:25.042336 | orchestrator | export IMAGE_USER=ubuntu 2025-09-27 21:32:25.042350 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-27 21:32:25.042360 | orchestrator | 2025-09-27 21:32:25.042371 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-27 21:32:25.042735 | orchestrator | 2025-09-27 21:32:25.042756 | orchestrator | + echo 2025-09-27 21:32:25.042769 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-27 21:32:25.043737 | orchestrator | ++ export INTERACTIVE=false 2025-09-27 21:32:25.043757 | orchestrator | ++ INTERACTIVE=false 2025-09-27 21:32:25.043775 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-27 21:32:25.043787 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-27 21:32:25.044015 | orchestrator | + source /opt/manager-vars.sh 2025-09-27 21:32:25.044032 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-27 21:32:25.044044 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-27 21:32:25.044247 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-27 21:32:25.044266 | orchestrator | ++ CEPH_VERSION=reef 2025-09-27 21:32:25.044277 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-27 21:32:25.044293 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-27 21:32:25.044304 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-27 21:32:25.044315 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-09-27 21:32:25.044326 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-27 21:32:25.044345 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-27 21:32:25.044357 | orchestrator | ++ export ARA=false 2025-09-27 21:32:25.044368 | orchestrator | ++ ARA=false 2025-09-27 21:32:25.044379 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-27 21:32:25.044390 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-27 21:32:25.044641 | orchestrator | ++ export TEMPEST=false 2025-09-27 21:32:25.044660 | orchestrator | ++ TEMPEST=false 2025-09-27 21:32:25.044671 | orchestrator | ++ export IS_ZUUL=true 2025-09-27 21:32:25.044682 | orchestrator | ++ IS_ZUUL=true 2025-09-27 21:32:25.044693 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-27 21:32:25.044704 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-27 21:32:25.044715 | orchestrator | ++ export EXTERNAL_API=false 2025-09-27 21:32:25.044726 | orchestrator | ++ EXTERNAL_API=false 2025-09-27 21:32:25.044737 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-27 21:32:25.044747 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-27 21:32:25.044758 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-27 21:32:25.044769 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-27 21:32:25.044780 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-27 21:32:25.044791 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-27 21:32:25.044803 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-27 21:32:25.098858 | orchestrator | + docker version 2025-09-27 21:32:25.365825 | orchestrator | Client: Docker Engine - Community 2025-09-27 21:32:25.365931 | orchestrator | Version: 27.5.1 2025-09-27 21:32:25.365949 | orchestrator | API version: 1.47 2025-09-27 21:32:25.365961 | orchestrator | Go version: go1.22.11 2025-09-27 21:32:25.365973 | orchestrator | Git commit: 9f9e405 2025-09-27 
21:32:25.365985 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-27 21:32:25.365998 | orchestrator | OS/Arch: linux/amd64 2025-09-27 21:32:25.366009 | orchestrator | Context: default 2025-09-27 21:32:25.366068 | orchestrator | 2025-09-27 21:32:25.366080 | orchestrator | Server: Docker Engine - Community 2025-09-27 21:32:25.366091 | orchestrator | Engine: 2025-09-27 21:32:25.366103 | orchestrator | Version: 27.5.1 2025-09-27 21:32:25.366114 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-27 21:32:25.366154 | orchestrator | Go version: go1.22.11 2025-09-27 21:32:25.366166 | orchestrator | Git commit: 4c9b3b0 2025-09-27 21:32:25.366177 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-27 21:32:25.366188 | orchestrator | OS/Arch: linux/amd64 2025-09-27 21:32:25.366199 | orchestrator | Experimental: false 2025-09-27 21:32:25.366210 | orchestrator | containerd: 2025-09-27 21:32:25.366221 | orchestrator | Version: v1.7.28 2025-09-27 21:32:25.366232 | orchestrator | GitCommit: b98a3aace656320842a23f4a392a33f46af97866 2025-09-27 21:32:25.366243 | orchestrator | runc: 2025-09-27 21:32:25.366254 | orchestrator | Version: 1.3.0 2025-09-27 21:32:25.366265 | orchestrator | GitCommit: v1.3.0-0-g4ca628d1 2025-09-27 21:32:25.366276 | orchestrator | docker-init: 2025-09-27 21:32:25.366287 | orchestrator | Version: 0.19.0 2025-09-27 21:32:25.366298 | orchestrator | GitCommit: de40ad0 2025-09-27 21:32:25.368938 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-27 21:32:25.378559 | orchestrator | + set -e 2025-09-27 21:32:25.378598 | orchestrator | + source /opt/manager-vars.sh 2025-09-27 21:32:25.378639 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-27 21:32:25.378652 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-27 21:32:25.378674 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-27 21:32:25.378685 | orchestrator | ++ CEPH_VERSION=reef 2025-09-27 21:32:25.378696 | orchestrator | ++ export 
CONFIGURATION_VERSION=main 2025-09-27 21:32:25.378707 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-27 21:32:25.378718 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-27 21:32:25.378757 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-27 21:32:25.378785 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-27 21:32:25.378796 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-27 21:32:25.378807 | orchestrator | ++ export ARA=false 2025-09-27 21:32:25.378818 | orchestrator | ++ ARA=false 2025-09-27 21:32:25.378829 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-27 21:32:25.378840 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-27 21:32:25.378850 | orchestrator | ++ export TEMPEST=false 2025-09-27 21:32:25.378861 | orchestrator | ++ TEMPEST=false 2025-09-27 21:32:25.378872 | orchestrator | ++ export IS_ZUUL=true 2025-09-27 21:32:25.378883 | orchestrator | ++ IS_ZUUL=true 2025-09-27 21:32:25.378893 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-27 21:32:25.378904 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-27 21:32:25.378915 | orchestrator | ++ export EXTERNAL_API=false 2025-09-27 21:32:25.378926 | orchestrator | ++ EXTERNAL_API=false 2025-09-27 21:32:25.378936 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-27 21:32:25.378947 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-27 21:32:25.378958 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-27 21:32:25.378969 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-27 21:32:25.379002 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-27 21:32:25.379014 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-27 21:32:25.379029 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-27 21:32:25.379040 | orchestrator | ++ export INTERACTIVE=false 2025-09-27 21:32:25.379051 | orchestrator | ++ INTERACTIVE=false 2025-09-27 21:32:25.379061 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-27 
21:32:25.379086 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-27 21:32:25.379101 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-27 21:32:25.379175 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-27 21:32:25.379236 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-09-27 21:32:25.387029 | orchestrator | + set -e
2025-09-27 21:32:25.387083 | orchestrator | + VERSION=reef
2025-09-27 21:32:25.388002 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-27 21:32:25.393742 | orchestrator | + [[ -n ceph_version: reef ]]
2025-09-27 21:32:25.393798 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-09-27 21:32:25.400551 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-09-27 21:32:25.407441 | orchestrator | + set -e
2025-09-27 21:32:25.407455 | orchestrator | + VERSION=2024.2
2025-09-27 21:32:25.408646 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-27 21:32:25.412277 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-09-27 21:32:25.412290 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-09-27 21:32:25.417791 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-27 21:32:25.418913 | orchestrator | ++ semver latest 7.0.0
2025-09-27 21:32:25.483343 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-27 21:32:25.483429 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-27 21:32:25.483442 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-27 21:32:25.483453 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-27 21:32:25.572439 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-27 21:32:25.573752 | orchestrator | + source /opt/venv/bin/activate
2025-09-27
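The set-ceph-version.sh / set-openstack-version.sh steps traced above follow a simple pattern: grep the configuration file to confirm the key is present, then rewrite its value in place with sed. A minimal sketch of that pattern; the `set_version` helper name is hypothetical, not the actual script:

```shell
#!/bin/sh
set -e

# Sketch of the version-pinning pattern from the trace: check that the key
# exists, then replace its value in place.
# Usage: set_version KEY VALUE FILE   (helper name is an assumption)
set_version() {
    key=$1 value=$2 file=$3
    # Only rewrite when the key is already present in the configuration file.
    if grep -q "^${key}:" "$file"; then
        sed -i "s/${key}: .*/${key}: ${value}/g" "$file"
    else
        echo "key ${key} not found in ${file}" >&2
        return 1
    fi
}
```

With `set_version ceph_version reef configuration.yml`, a line such as `ceph_version: quincy` becomes `ceph_version: reef`, matching the sed expressions visible in the trace.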
21:32:25.574970 | orchestrator | ++ deactivate nondestructive
2025-09-27 21:32:25.574999 | orchestrator | ++ '[' -n '' ']'
2025-09-27 21:32:25.575021 | orchestrator | ++ '[' -n '' ']'
2025-09-27 21:32:25.575033 | orchestrator | ++ hash -r
2025-09-27 21:32:25.575048 | orchestrator | ++ '[' -n '' ']'
2025-09-27 21:32:25.575059 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-27 21:32:25.575073 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-27 21:32:25.575201 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-27 21:32:25.575376 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-27 21:32:25.575394 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-27 21:32:25.575418 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-27 21:32:25.575430 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-27 21:32:25.575446 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-27 21:32:25.575461 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-27 21:32:25.575579 | orchestrator | ++ export PATH
2025-09-27 21:32:25.575676 | orchestrator | ++ '[' -n '' ']'
2025-09-27 21:32:25.575707 | orchestrator | ++ '[' -z '' ']'
2025-09-27 21:32:25.575760 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-27 21:32:25.575844 | orchestrator | ++ PS1='(venv) '
2025-09-27 21:32:25.575858 | orchestrator | ++ export PS1
2025-09-27 21:32:25.575869 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-27 21:32:25.575929 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-27 21:32:25.575943 | orchestrator | ++ hash -r
2025-09-27 21:32:25.576017 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-27 21:32:26.791940 | orchestrator |
2025-09-27 21:32:26.792038 | orchestrator |
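The activate trace above is the standard virtualenv mechanism: remember the old PATH, prepend the venv's bin directory, and restore everything on deactivate. A stripped-down sketch of just the PATH handling (real activate scripts also manage PS1, VIRTUAL_ENV_PROMPT, and `hash -r`, as the trace shows):

```shell
#!/bin/sh

# Minimal sketch of virtualenv PATH handling; prompt handling is omitted.
activate() {
    VIRTUAL_ENV=$1
    export VIRTUAL_ENV
    _OLD_VIRTUAL_PATH=$PATH          # remember the pre-venv PATH
    PATH="$VIRTUAL_ENV/bin:$PATH"    # venv binaries win lookup
    export PATH
}

deactivate() {
    PATH=$_OLD_VIRTUAL_PATH          # restore the original PATH
    export PATH
    unset _OLD_VIRTUAL_PATH VIRTUAL_ENV
}
```

Also worth noting from the trace: the trailing comma in `ansible-playbook -i testbed-manager,` makes Ansible treat the argument as a literal comma-separated host list rather than a path to an inventory file.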
PLAY [Copy custom facts] ******************************************************* 2025-09-27 21:32:26.792051 | orchestrator | 2025-09-27 21:32:26.792061 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-27 21:32:27.310839 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:27.310938 | orchestrator | 2025-09-27 21:32:27.310953 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-27 21:32:28.225233 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:28.225337 | orchestrator | 2025-09-27 21:32:28.225352 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-27 21:32:28.225364 | orchestrator | 2025-09-27 21:32:28.225376 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:32:30.537023 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:30.537740 | orchestrator | 2025-09-27 21:32:30.537773 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-27 21:32:30.592997 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:30.593051 | orchestrator | 2025-09-27 21:32:30.593068 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-27 21:32:31.048858 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:31.048940 | orchestrator | 2025-09-27 21:32:31.048955 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-27 21:32:31.096935 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:32:31.097007 | orchestrator | 2025-09-27 21:32:31.097022 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-27 21:32:31.422373 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:31.422458 | orchestrator | 2025-09-27 21:32:31.422474 | 
orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-27 21:32:31.473445 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:32:31.473507 | orchestrator | 2025-09-27 21:32:31.473523 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-27 21:32:31.781728 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:31.781814 | orchestrator | 2025-09-27 21:32:31.781829 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-27 21:32:31.890466 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:32:31.890541 | orchestrator | 2025-09-27 21:32:31.890554 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-27 21:32:31.890564 | orchestrator | 2025-09-27 21:32:31.890576 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:32:33.523270 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:33.523357 | orchestrator | 2025-09-27 21:32:33.523372 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-27 21:32:33.626120 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-27 21:32:33.626191 | orchestrator | 2025-09-27 21:32:33.626203 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-27 21:32:33.683828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-27 21:32:33.683867 | orchestrator | 2025-09-27 21:32:33.683878 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-27 21:32:34.754223 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-27 21:32:34.754313 | orchestrator | changed: [testbed-manager] => 
(item=/opt/traefik/certificates) 2025-09-27 21:32:34.754328 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-27 21:32:34.754340 | orchestrator | 2025-09-27 21:32:34.754352 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-27 21:32:36.543430 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-27 21:32:36.543524 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-27 21:32:36.543542 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-27 21:32:36.543554 | orchestrator | 2025-09-27 21:32:36.543567 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-27 21:32:37.159704 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:32:37.159790 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:37.159806 | orchestrator | 2025-09-27 21:32:37.159818 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-27 21:32:37.787407 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:32:37.787494 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:37.787510 | orchestrator | 2025-09-27 21:32:37.787523 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-09-27 21:32:37.849480 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:32:37.849551 | orchestrator | 2025-09-27 21:32:37.849565 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-27 21:32:38.178005 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:38.178121 | orchestrator | 2025-09-27 21:32:38.178136 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-27 21:32:38.255072 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-27 21:32:38.255138 | orchestrator | 2025-09-27 21:32:38.255149 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-27 21:32:39.250959 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:39.251049 | orchestrator | 2025-09-27 21:32:39.251065 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-27 21:32:40.078513 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:40.078601 | orchestrator | 2025-09-27 21:32:40.078661 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-27 21:32:50.478187 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:50.478286 | orchestrator | 2025-09-27 21:32:50.478310 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-27 21:32:50.526070 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:32:50.526146 | orchestrator | 2025-09-27 21:32:50.526160 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-27 21:32:50.526172 | orchestrator | 2025-09-27 21:32:50.526184 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:32:52.144209 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:52.144314 | orchestrator | 2025-09-27 21:32:52.144387 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-27 21:32:52.260235 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-27 21:32:52.260326 | orchestrator | 2025-09-27 21:32:52.260341 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-27 21:32:52.321391 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:32:52.321468 | orchestrator | 2025-09-27 21:32:52.321482 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-27 21:32:54.572289 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:54.572384 | orchestrator | 2025-09-27 21:32:54.572400 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-27 21:32:54.621657 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:54.621724 | orchestrator | 2025-09-27 21:32:54.621739 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-27 21:32:54.735811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-27 21:32:54.735902 | orchestrator | 2025-09-27 21:32:54.735917 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-27 21:32:57.461009 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-27 21:32:57.461099 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-27 21:32:57.461113 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-27 21:32:57.461125 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-27 21:32:57.461136 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-27 21:32:57.461147 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-27 21:32:57.461158 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-27 21:32:57.461169 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-27 21:32:57.461180 | orchestrator | 2025-09-27 21:32:57.461192 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-09-27 21:32:58.101152 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:58.101245 | orchestrator | 2025-09-27 21:32:58.101263 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-27 21:32:58.767460 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:58.767548 | orchestrator | 2025-09-27 21:32:58.767564 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-27 21:32:58.846765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-27 21:32:58.846844 | orchestrator | 2025-09-27 21:32:58.846857 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-27 21:33:00.062892 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-27 21:33:00.062977 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-27 21:33:00.062991 | orchestrator | 2025-09-27 21:33:00.063003 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-27 21:33:00.686689 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:00.686780 | orchestrator | 2025-09-27 21:33:00.686795 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-27 21:33:00.735980 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:33:00.736060 | orchestrator | 2025-09-27 21:33:00.736075 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-27 21:33:00.812697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-27 21:33:00.812747 | orchestrator | 2025-09-27 21:33:00.812761 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2025-09-27 21:33:01.438917 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:01.439001 | orchestrator | 2025-09-27 21:33:01.439017 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-27 21:33:01.498757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-27 21:33:01.498833 | orchestrator | 2025-09-27 21:33:01.498846 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-27 21:33:02.881692 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:33:02.881773 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:33:02.881785 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:02.881796 | orchestrator | 2025-09-27 21:33:02.881806 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-27 21:33:03.521110 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:03.521202 | orchestrator | 2025-09-27 21:33:03.521218 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-27 21:33:03.570646 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:33:03.570725 | orchestrator | 2025-09-27 21:33:03.570742 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-27 21:33:03.666167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-27 21:33:03.666235 | orchestrator | 2025-09-27 21:33:03.666248 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-27 21:33:04.231345 | orchestrator | changed: [testbed-manager] 2025-09-27 
21:33:04.231408 | orchestrator | 2025-09-27 21:33:04.231417 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-27 21:33:04.634413 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:04.634508 | orchestrator | 2025-09-27 21:33:04.634524 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-27 21:33:05.920689 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-27 21:33:05.920773 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-27 21:33:05.920788 | orchestrator | 2025-09-27 21:33:05.920802 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-27 21:33:06.559511 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:06.559637 | orchestrator | 2025-09-27 21:33:06.559656 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-27 21:33:06.956657 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:06.956724 | orchestrator | 2025-09-27 21:33:06.956733 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-27 21:33:07.324485 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:07.324570 | orchestrator | 2025-09-27 21:33:07.324631 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-27 21:33:07.361731 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:33:07.361767 | orchestrator | 2025-09-27 21:33:07.361779 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-27 21:33:07.432304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-27 21:33:07.432389 | orchestrator | 2025-09-27 21:33:07.432403 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2025-09-27 21:33:07.475463 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:07.475497 | orchestrator | 2025-09-27 21:33:07.475509 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-27 21:33:09.516566 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-27 21:33:09.516702 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-27 21:33:09.516719 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-27 21:33:09.516732 | orchestrator | 2025-09-27 21:33:09.516744 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-27 21:33:10.219243 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:10.219318 | orchestrator | 2025-09-27 21:33:10.219332 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-27 21:33:10.978755 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:10.978844 | orchestrator | 2025-09-27 21:33:10.978859 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-27 21:33:11.643724 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:11.643807 | orchestrator | 2025-09-27 21:33:11.643822 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-27 21:33:11.708550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-27 21:33:11.708634 | orchestrator | 2025-09-27 21:33:11.708646 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-27 21:33:11.754764 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:11.754828 | orchestrator | 2025-09-27 21:33:11.754841 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2025-09-27 21:33:12.485290 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-27 21:33:12.485373 | orchestrator | 2025-09-27 21:33:12.485388 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-27 21:33:12.558570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-27 21:33:12.558687 | orchestrator | 2025-09-27 21:33:12.558700 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-27 21:33:13.261628 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:13.261718 | orchestrator | 2025-09-27 21:33:13.261733 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-27 21:33:13.838082 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:13.838161 | orchestrator | 2025-09-27 21:33:13.838175 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-27 21:33:13.888624 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:33:13.888697 | orchestrator | 2025-09-27 21:33:13.888710 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-27 21:33:13.937542 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:13.937633 | orchestrator | 2025-09-27 21:33:13.937646 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-27 21:33:14.758172 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:14.758260 | orchestrator | 2025-09-27 21:33:14.758275 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-27 21:34:20.661497 | orchestrator | changed: [testbed-manager] 2025-09-27 21:34:20.661662 | orchestrator | 2025-09-27 
21:34:20.661680 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-27 21:34:21.555485 | orchestrator | ok: [testbed-manager] 2025-09-27 21:34:21.555652 | orchestrator | 2025-09-27 21:34:21.555670 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-27 21:34:21.640378 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:34:21.640473 | orchestrator | 2025-09-27 21:34:21.640491 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-27 21:34:24.017344 | orchestrator | changed: [testbed-manager] 2025-09-27 21:34:24.017432 | orchestrator | 2025-09-27 21:34:24.017443 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-27 21:34:24.070883 | orchestrator | ok: [testbed-manager] 2025-09-27 21:34:24.070988 | orchestrator | 2025-09-27 21:34:24.071003 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-27 21:34:24.071015 | orchestrator | 2025-09-27 21:34:24.071026 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-27 21:34:24.116718 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:34:24.116807 | orchestrator | 2025-09-27 21:34:24.116822 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-27 21:35:24.164853 | orchestrator | Pausing for 60 seconds 2025-09-27 21:35:24.164981 | orchestrator | changed: [testbed-manager] 2025-09-27 21:35:24.164999 | orchestrator | 2025-09-27 21:35:24.165012 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-27 21:35:28.610305 | orchestrator | changed: [testbed-manager] 2025-09-27 21:35:28.610463 | orchestrator | 2025-09-27 21:35:28.610483 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2025-09-27 21:36:10.188974 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-27 21:36:10.189093 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-09-27 21:36:10.189109 | orchestrator | changed: [testbed-manager] 2025-09-27 21:36:10.189152 | orchestrator | 2025-09-27 21:36:10.189165 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-27 21:36:19.948906 | orchestrator | changed: [testbed-manager] 2025-09-27 21:36:19.949031 | orchestrator | 2025-09-27 21:36:19.949048 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-27 21:36:20.022666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-27 21:36:20.022786 | orchestrator | 2025-09-27 21:36:20.022803 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-27 21:36:20.022816 | orchestrator | 2025-09-27 21:36:20.022827 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-27 21:36:20.077243 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:36:20.077338 | orchestrator | 2025-09-27 21:36:20.077386 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2025-09-27 21:36:20.144494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2025-09-27 21:36:20.144583 | orchestrator | 2025-09-27 21:36:20.144600 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2025-09-27 21:36:20.957185 | orchestrator | changed: [testbed-manager] 2025-09-27 21:36:20.957289 | 
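The "Wait for an healthy manager service" handler above retries up to 50 times until the service reports healthy (the log shows two failed attempts before success). A hedged sketch of such a polling loop; `check_health` is a hypothetical stand-in for whatever the role actually runs, e.g. a `docker inspect` of the container's health status:

```shell
#!/bin/sh

# Poll check_health until it reports "healthy" or the retries run out.
# check_health is a placeholder command; WAIT_INTERVAL defaults to 1 second.
wait_healthy() {
    retries=$1
    while [ "$retries" -gt 0 ]; do
        if [ "$(check_health)" = "healthy" ]; then
            return 0        # service is up
        fi
        retries=$((retries - 1))
        sleep "${WAIT_INTERVAL:-1}"
    done
    return 1                # gave up after exhausting all retries
}
```

With 50 retries this mirrors the handler's behavior: each failed probe logs a "FAILED - RETRYING" line and decrements the remaining count.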
orchestrator | 2025-09-27 21:36:20.957304 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2025-09-27 21:36:24.804050 | orchestrator | ok: [testbed-manager] 2025-09-27 21:36:24.804161 | orchestrator | 2025-09-27 21:36:24.804179 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2025-09-27 21:36:24.874756 | orchestrator | ok: [testbed-manager] => { 2025-09-27 21:36:24.874825 | orchestrator | "version_check_result.stdout_lines": [ 2025-09-27 21:36:24.874840 | orchestrator | "=== OSISM Container Version Check ===", 2025-09-27 21:36:24.874852 | orchestrator | "Checking running containers against expected versions...", 2025-09-27 21:36:24.874864 | orchestrator | "", 2025-09-27 21:36:24.874876 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2025-09-27 21:36:24.874887 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2025-09-27 21:36:24.874898 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.874910 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2025-09-27 21:36:24.874921 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.874932 | orchestrator | "", 2025-09-27 21:36:24.874944 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2025-09-27 21:36:24.874955 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2025-09-27 21:36:24.874966 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.874977 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2025-09-27 21:36:24.874988 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.874999 | orchestrator | "", 2025-09-27 21:36:24.875010 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2025-09-27 21:36:24.875021 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2025-09-27 
21:36:24.875032 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875043 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2025-09-27 21:36:24.875054 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875065 | orchestrator | "", 2025-09-27 21:36:24.875076 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2025-09-27 21:36:24.875087 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2025-09-27 21:36:24.875098 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875109 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2025-09-27 21:36:24.875120 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875132 | orchestrator | "", 2025-09-27 21:36:24.875143 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2025-09-27 21:36:24.875153 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-09-27 21:36:24.875192 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875204 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-09-27 21:36:24.875215 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875226 | orchestrator | "", 2025-09-27 21:36:24.875236 | orchestrator | "Checking service: osismclient (OSISM Client)", 2025-09-27 21:36:24.875247 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875258 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875269 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875279 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875290 | orchestrator | "", 2025-09-27 21:36:24.875301 | orchestrator | "Checking service: ara-server (ARA Server)", 2025-09-27 21:36:24.875312 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2025-09-27 21:36:24.875323 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875336 | orchestrator | " Running: 
registry.osism.tech/osism/ara-server:1.7.3", 2025-09-27 21:36:24.875389 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875402 | orchestrator | "", 2025-09-27 21:36:24.875414 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2025-09-27 21:36:24.875432 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-09-27 21:36:24.875444 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875457 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-09-27 21:36:24.875469 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875481 | orchestrator | "", 2025-09-27 21:36:24.875492 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2025-09-27 21:36:24.875504 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2025-09-27 21:36:24.875516 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875533 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2025-09-27 21:36:24.875545 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875557 | orchestrator | "", 2025-09-27 21:36:24.875569 | orchestrator | "Checking service: redis (Redis Cache)", 2025-09-27 21:36:24.875581 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-09-27 21:36:24.875592 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875604 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-09-27 21:36:24.875616 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875628 | orchestrator | "", 2025-09-27 21:36:24.875640 | orchestrator | "Checking service: api (OSISM API Service)", 2025-09-27 21:36:24.875652 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875664 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875676 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875688 | orchestrator | " 
Status: ✅ MATCH", 2025-09-27 21:36:24.875699 | orchestrator | "", 2025-09-27 21:36:24.875710 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2025-09-27 21:36:24.875720 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875731 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875741 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875752 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875763 | orchestrator | "", 2025-09-27 21:36:24.875773 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2025-09-27 21:36:24.875784 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875795 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875805 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875816 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875827 | orchestrator | "", 2025-09-27 21:36:24.875837 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2025-09-27 21:36:24.875848 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875858 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875869 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875888 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875899 | orchestrator | "", 2025-09-27 21:36:24.875910 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2025-09-27 21:36:24.875937 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875948 | orchestrator | " Enabled: true", 2025-09-27 21:36:24.875959 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:36:24.875969 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:36:24.875980 | orchestrator | "", 2025-09-27 21:36:24.875991 | orchestrator | "=== Summary ===", 2025-09-27 
21:36:24.876002 | orchestrator | "Errors (version mismatches): 0", 2025-09-27 21:36:24.876012 | orchestrator | "Warnings (expected containers not running): 0", 2025-09-27 21:36:24.876023 | orchestrator | "", 2025-09-27 21:36:24.876033 | orchestrator | "✅ All running containers match expected versions!" 2025-09-27 21:36:24.876044 | orchestrator | ] 2025-09-27 21:36:24.876055 | orchestrator | } 2025-09-27 21:36:24.876067 | orchestrator | 2025-09-27 21:36:24.876078 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-09-27 21:36:24.922684 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:36:24.922727 | orchestrator | 2025-09-27 21:36:24.922739 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:36:24.922753 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-27 21:36:24.922764 | orchestrator | 2025-09-27 21:36:25.022461 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-27 21:36:25.022558 | orchestrator | + deactivate 2025-09-27 21:36:25.022574 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-27 21:36:25.022587 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-27 21:36:25.022598 | orchestrator | + export PATH 2025-09-27 21:36:25.022610 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-27 21:36:25.022622 | orchestrator | + '[' -n '' ']' 2025-09-27 21:36:25.022633 | orchestrator | + hash -r 2025-09-27 21:36:25.022644 | orchestrator | + '[' -n '' ']' 2025-09-27 21:36:25.022655 | orchestrator | + unset VIRTUAL_ENV 2025-09-27 21:36:25.022666 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-27 21:36:25.022677 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-27 21:36:25.022688 | orchestrator | + unset -f deactivate 2025-09-27 21:36:25.022699 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-27 21:36:25.031282 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-27 21:36:25.031305 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-27 21:36:25.031316 | orchestrator | + local max_attempts=60 2025-09-27 21:36:25.031328 | orchestrator | + local name=ceph-ansible 2025-09-27 21:36:25.031339 | orchestrator | + local attempt_num=1 2025-09-27 21:36:25.032015 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:36:25.068397 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:36:25.068475 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-27 21:36:25.068488 | orchestrator | + local max_attempts=60 2025-09-27 21:36:25.068500 | orchestrator | + local name=kolla-ansible 2025-09-27 21:36:25.068511 | orchestrator | + local attempt_num=1 2025-09-27 21:36:25.069770 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-27 21:36:25.104749 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:36:25.104834 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-27 21:36:25.104847 | orchestrator | + local max_attempts=60 2025-09-27 21:36:25.104859 | orchestrator | + local name=osism-ansible 2025-09-27 21:36:25.104871 | orchestrator | + local attempt_num=1 2025-09-27 21:36:25.105810 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-27 21:36:25.133067 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:36:25.133104 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-27 21:36:25.133117 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-27 21:36:25.757097 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2025-09-27 21:36:25.967098 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-27 21:36:25.967216 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-09-27 21:36:25.967230 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-09-27 21:36:25.967240 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-27 21:36:25.967251 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2025-09-27 21:36:25.967260 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2025-09-27 21:36:25.967286 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2025-09-27 21:36:25.967296 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 57 seconds (healthy) 2025-09-27 21:36:25.967305 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2025-09-27 21:36:25.967315 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2025-09-27 21:36:25.967324 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy) 2025-09-27 
21:36:25.967334 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2025-09-27 21:36:25.967390 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2025-09-27 21:36:25.967401 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-27 21:36:25.967411 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2025-09-27 21:36:25.967420 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2025-09-27 21:36:25.973620 | orchestrator | ++ semver latest 7.0.0 2025-09-27 21:36:26.025559 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-27 21:36:26.025617 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-27 21:36:26.025631 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-27 21:36:26.029177 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-27 21:36:38.186441 | orchestrator | 2025-09-27 21:36:38 | INFO  | Task e8901580-692e-40c9-b5ec-b6b81871b3c5 (resolvconf) was prepared for execution. 2025-09-27 21:36:38.186558 | orchestrator | 2025-09-27 21:36:38 | INFO  | It takes a moment until task e8901580-692e-40c9-b5ec-b6b81871b3c5 (resolvconf) has been started and output is visible here. 
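The `wait_for_container_healthy` helper traced earlier via `set -x` (the `max_attempts`, `name`, and `attempt_num` locals and the `docker inspect` health probe) can be sketched as below. Only those names and the probe command come from the log; the polling interval and the failure path are assumptions, and the log invokes `/usr/bin/docker` where plain `docker` is used here.

```shell
# Hypothetical reconstruction of the wait_for_container_healthy helper
# traced above; variable names and the docker inspect probe are taken
# from the log, the sleep interval and error handling are assumptions.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5  # assumed polling interval, not visible in the trace
    done
}
```

In the run above all three containers (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) were already healthy on the first probe, so the loop body never executed.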
2025-09-27 21:36:50.788099 | orchestrator | 2025-09-27 21:36:50.788218 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-27 21:36:50.788234 | orchestrator | 2025-09-27 21:36:50.788247 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:36:50.788258 | orchestrator | Saturday 27 September 2025 21:36:41 +0000 (0:00:00.109) 0:00:00.109 **** 2025-09-27 21:36:50.788269 | orchestrator | ok: [testbed-manager] 2025-09-27 21:36:50.788282 | orchestrator | 2025-09-27 21:36:50.788293 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-27 21:36:50.788305 | orchestrator | Saturday 27 September 2025 21:36:45 +0000 (0:00:03.366) 0:00:03.476 **** 2025-09-27 21:36:50.788361 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:36:50.788374 | orchestrator | 2025-09-27 21:36:50.788385 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-27 21:36:50.788396 | orchestrator | Saturday 27 September 2025 21:36:45 +0000 (0:00:00.055) 0:00:03.531 **** 2025-09-27 21:36:50.788419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-27 21:36:50.788433 | orchestrator | 2025-09-27 21:36:50.788444 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-27 21:36:50.788454 | orchestrator | Saturday 27 September 2025 21:36:45 +0000 (0:00:00.078) 0:00:03.610 **** 2025-09-27 21:36:50.788466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:36:50.788476 | orchestrator | 2025-09-27 21:36:50.788487 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-09-27 21:36:50.788498 | orchestrator | Saturday 27 September 2025 21:36:45 +0000 (0:00:00.075) 0:00:03.685 **** 2025-09-27 21:36:50.788509 | orchestrator | ok: [testbed-manager] 2025-09-27 21:36:50.788520 | orchestrator | 2025-09-27 21:36:50.788531 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-27 21:36:50.788541 | orchestrator | Saturday 27 September 2025 21:36:46 +0000 (0:00:00.882) 0:00:04.568 **** 2025-09-27 21:36:50.788552 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:36:50.788563 | orchestrator | 2025-09-27 21:36:50.788574 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-27 21:36:50.788585 | orchestrator | Saturday 27 September 2025 21:36:46 +0000 (0:00:00.063) 0:00:04.631 **** 2025-09-27 21:36:50.788595 | orchestrator | ok: [testbed-manager] 2025-09-27 21:36:50.788606 | orchestrator | 2025-09-27 21:36:50.788618 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-27 21:36:50.788630 | orchestrator | Saturday 27 September 2025 21:36:46 +0000 (0:00:00.464) 0:00:05.096 **** 2025-09-27 21:36:50.788642 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:36:50.788654 | orchestrator | 2025-09-27 21:36:50.788666 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-27 21:36:50.788679 | orchestrator | Saturday 27 September 2025 21:36:46 +0000 (0:00:00.087) 0:00:05.184 **** 2025-09-27 21:36:50.788691 | orchestrator | changed: [testbed-manager] 2025-09-27 21:36:50.788703 | orchestrator | 2025-09-27 21:36:50.788715 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-27 21:36:50.788728 | orchestrator | Saturday 27 September 2025 21:36:47 +0000 (0:00:00.518) 0:00:05.702 **** 2025-09-27 21:36:50.788739 | orchestrator | changed: 
[testbed-manager] 2025-09-27 21:36:50.788751 | orchestrator | 2025-09-27 21:36:50.788763 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-27 21:36:50.788775 | orchestrator | Saturday 27 September 2025 21:36:48 +0000 (0:00:01.044) 0:00:06.747 **** 2025-09-27 21:36:50.788787 | orchestrator | ok: [testbed-manager] 2025-09-27 21:36:50.788799 | orchestrator | 2025-09-27 21:36:50.788810 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-27 21:36:50.788845 | orchestrator | Saturday 27 September 2025 21:36:49 +0000 (0:00:00.957) 0:00:07.704 **** 2025-09-27 21:36:50.788861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-27 21:36:50.788879 | orchestrator | 2025-09-27 21:36:50.788896 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-27 21:36:50.788915 | orchestrator | Saturday 27 September 2025 21:36:49 +0000 (0:00:00.076) 0:00:07.781 **** 2025-09-27 21:36:50.788935 | orchestrator | changed: [testbed-manager] 2025-09-27 21:36:50.788954 | orchestrator | 2025-09-27 21:36:50.788971 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:36:50.788983 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 21:36:50.788993 | orchestrator | 2025-09-27 21:36:50.789005 | orchestrator | 2025-09-27 21:36:50.789015 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:36:50.789026 | orchestrator | Saturday 27 September 2025 21:36:50 +0000 (0:00:01.147) 0:00:08.929 **** 2025-09-27 21:36:50.789036 | orchestrator | =============================================================================== 2025-09-27 21:36:50.789047 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.37s 2025-09-27 21:36:50.789058 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s 2025-09-27 21:36:50.789068 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2025-09-27 21:36:50.789078 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s 2025-09-27 21:36:50.789089 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.88s 2025-09-27 21:36:50.789099 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2025-09-27 21:36:50.789128 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2025-09-27 21:36:50.789140 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-09-27 21:36:50.789157 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-09-27 21:36:50.789168 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-09-27 21:36:50.789179 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-09-27 21:36:50.789189 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-09-27 21:36:50.789200 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-09-27 21:36:51.067949 | orchestrator | + osism apply sshconfig 2025-09-27 21:37:03.029076 | orchestrator | 2025-09-27 21:37:03 | INFO  | Task 4ab521b2-130c-4771-90c8-5ebe8ed80e4f (sshconfig) was prepared for execution. 
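The resolvconf play above boils down to pointing `/etc/resolv.conf` at systemd-resolved's stub resolver and restarting the service. A minimal shell equivalent of the link task, under the assumption that the role's other steps (package removal, archiving, templating) are handled separately:

```shell
# Hypothetical shell equivalent of the osism.commons.resolvconf link task
# reported above; the real step is an Ansible task, not this script.
# Arguments default to the real paths; pass others to exercise it safely.
link_stub_resolv() {
    local target=${1:-/run/systemd/resolve/stub-resolv.conf}
    local link=${2:-/etc/resolv.conf}
    # Replace the file (or a stale link) with a symlink to the stub resolver.
    ln -sfn "$target" "$link"
}
```

On the real host this is followed by restarting `systemd-resolved`, matching the play's final changed task.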
2025-09-27 21:37:03.029200 | orchestrator | 2025-09-27 21:37:03 | INFO  | It takes a moment until task 4ab521b2-130c-4771-90c8-5ebe8ed80e4f (sshconfig) has been started and output is visible here. 2025-09-27 21:37:14.044547 | orchestrator | 2025-09-27 21:37:14.044669 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-27 21:37:14.044686 | orchestrator | 2025-09-27 21:37:14.044698 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-27 21:37:14.044710 | orchestrator | Saturday 27 September 2025 21:37:06 +0000 (0:00:00.122) 0:00:00.122 **** 2025-09-27 21:37:14.044721 | orchestrator | ok: [testbed-manager] 2025-09-27 21:37:14.044734 | orchestrator | 2025-09-27 21:37:14.044745 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-27 21:37:14.044756 | orchestrator | Saturday 27 September 2025 21:37:07 +0000 (0:00:00.504) 0:00:00.627 **** 2025-09-27 21:37:14.044767 | orchestrator | changed: [testbed-manager] 2025-09-27 21:37:14.044779 | orchestrator | 2025-09-27 21:37:14.044790 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-27 21:37:14.044825 | orchestrator | Saturday 27 September 2025 21:37:07 +0000 (0:00:00.451) 0:00:01.079 **** 2025-09-27 21:37:14.044836 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-27 21:37:14.044847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-27 21:37:14.044858 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-27 21:37:14.044869 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-27 21:37:14.044880 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-27 21:37:14.044891 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-27 21:37:14.044901 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2025-09-27 21:37:14.044912 | orchestrator | 2025-09-27 21:37:14.044923 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-27 21:37:14.044934 | orchestrator | Saturday 27 September 2025 21:37:13 +0000 (0:00:05.345) 0:00:06.424 **** 2025-09-27 21:37:14.044945 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:37:14.044955 | orchestrator | 2025-09-27 21:37:14.044966 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-27 21:37:14.044976 | orchestrator | Saturday 27 September 2025 21:37:13 +0000 (0:00:00.081) 0:00:06.505 **** 2025-09-27 21:37:14.044987 | orchestrator | changed: [testbed-manager] 2025-09-27 21:37:14.044998 | orchestrator | 2025-09-27 21:37:14.045008 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:37:14.045021 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:37:14.045033 | orchestrator | 2025-09-27 21:37:14.045044 | orchestrator | 2025-09-27 21:37:14.045055 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:37:14.045066 | orchestrator | Saturday 27 September 2025 21:37:13 +0000 (0:00:00.583) 0:00:07.089 **** 2025-09-27 21:37:14.045079 | orchestrator | =============================================================================== 2025-09-27 21:37:14.045091 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.35s 2025-09-27 21:37:14.045104 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-09-27 21:37:14.045116 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.50s 2025-09-27 21:37:14.045128 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.45s 2025-09-27 21:37:14.045141 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-09-27 21:37:14.305100 | orchestrator | + osism apply known-hosts 2025-09-27 21:37:26.363880 | orchestrator | 2025-09-27 21:37:26 | INFO  | Task 26dbce23-a1a7-48f1-8ae7-9ddc815ab88f (known-hosts) was prepared for execution. 2025-09-27 21:37:26.363982 | orchestrator | 2025-09-27 21:37:26 | INFO  | It takes a moment until task 26dbce23-a1a7-48f1-8ae7-9ddc815ab88f (known-hosts) has been started and output is visible here. 2025-09-27 21:37:43.054708 | orchestrator | 2025-09-27 21:37:43.054843 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-27 21:37:43.054858 | orchestrator | 2025-09-27 21:37:43.054869 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-27 21:37:43.054881 | orchestrator | Saturday 27 September 2025 21:37:30 +0000 (0:00:00.174) 0:00:00.174 **** 2025-09-27 21:37:43.054891 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-27 21:37:43.054902 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-27 21:37:43.054911 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-27 21:37:43.054921 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-27 21:37:43.054956 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-27 21:37:43.054974 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-27 21:37:43.054984 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-27 21:37:43.055017 | orchestrator | 2025-09-27 21:37:43.055028 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-27 21:37:43.055038 | orchestrator | Saturday 27 September 2025 21:37:36 +0000 (0:00:06.003) 0:00:06.177 **** 2025-09-27 
21:37:43.055049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-27 21:37:43.055061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-27 21:37:43.055071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-27 21:37:43.055080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-27 21:37:43.055090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-27 21:37:43.055100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-27 21:37:43.055110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-27 21:37:43.055119 | orchestrator | 2025-09-27 21:37:43.055129 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:43.055138 | orchestrator | Saturday 27 September 2025 21:37:36 +0000 (0:00:00.181) 0:00:06.359 **** 2025-09-27 21:37:43.055148 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEvyUkifTwqC55Fxcfa1eNdbli04wCIlh6nWWp50EsRZeAV0rL8YzRpQqIzgkAu3Er55n5vwfElHX7H0BjyQQJc=) 2025-09-27 21:37:43.055162 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC12GaJ+3eTpNwEaqYNA6V4SapkFCoKPvZHs8t9Umu4UxUL6HuRTZ5we1vSNw24WxZIpY9+JaR0WtNVeo9AnUVNHa39qcD+v85H9PED1NrBpRMDTvUvDP/OxOq386ywtKNm9n/p53up9a9u1t/U7OS8E4spPh6uc2ayANxxwCHLN/Fj5h4IWtWfvXhVFHXihsZH/JwmY0vMxR0QeysBmZApj1ZGso+gOGRR6vz5XH46kUS3omBXJmIsVJG54V2QtTIry5l3Avv/ua27UYSHMZRkzgd6i+110C9MD2xgsrPINXsBsrWa8NeN34qN+BNSqKljqcPK2v+DqU9NebZXash45njeKm5h8v4LKYGX+yg54hWQMLy63ZCLWVfcu+nF4btMavFDv8hmfDu6Yy9Sa0GTVWpfQpiDKqT9BVAFgCU1HD2pfYwfKChkSga0YtKtdug/PfzVLTrGlHfBn34j7R0B6vICideqG9+pgAvSkhjfGQeuhqmQGznLUyOYm609sVE=) 2025-09-27 21:37:43.055175 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOzZSPeDmm5uOpw9w5fE7zVcssoXX3X5d3TPREGFLfDB) 2025-09-27 21:37:43.055186 | orchestrator | 2025-09-27 21:37:43.055196 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:43.055206 | orchestrator | Saturday 27 September 2025 21:37:37 +0000 (0:00:01.210) 0:00:07.570 **** 2025-09-27 21:37:43.055218 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHDiaxJ18JAHV3ijpYzYZsrkWIoBq2yJaCd0wABQI0b6MpznquesfDYBoJsF8fOwGK6Adk1/MeQy8THE3BkVAYs=) 2025-09-27 21:37:43.055229 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJzWhV0jBFZFepl/Mqc3ddfBqxblm/U9pOb/lCiICMop) 2025-09-27 21:37:43.055288 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDUKM38HFsBL4o2hE4ljqu61xLW0ZnL+MSsz8dZ3jxlwrJr/K8w8OvYe8TxABtl6zSN7kxX+kG8AGU4hqIqR5GFfbz2MdPUTukLUaGjdDiP5m5noVcdLbebOGpS5/Uru9cyXENXayYo2bFQ/bGFwJPVvuurm6AGJG0MFlXYe75bhzHKEPW/xwFupCQ6U9uG+V00LQ8bxP6cUdNgJqDlW0g2FIiRqReW9kgc20PMk3ZqTeXKfY+igStE6PisQaTr4Q1PAvl9Dn3nsKjra0pEJURYqebgl38LvPyqpAFHXYYQ3Ab1SZTK0X2LWCTS44LdDXORShu3hKvUB+BwEAAoEVp0QZ49vP9gw4xw8joi5S+E5vvK4vQaoMb9hhrtH2QNipauh5jVTV0Ut4OsJa6absOUFXLPn0flnJ/0Rp0j7KLbnsBuLi4nFWaZZiccRgj/1Fd9USuFz9JReA0Zff4ofnYxjkOhMGHoqnf9eGtIxfjlbU4k+PrDDS+qxnpZG8TfEwc=) 2025-09-27 21:37:43.055311 | orchestrator | 2025-09-27 21:37:43.055322 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:43.055333 | orchestrator | Saturday 27 September 2025 21:37:38 +0000 (0:00:01.033) 0:00:08.603 **** 2025-09-27 21:37:43.055344 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDg16x591R/ezJbA886ryHpqa8F/r3TzvAegw5GYLMJTKoZer3Wo0upU2pt37NnPn5OqlOnrdTIsajhYtrgtrLw=) 2025-09-27 21:37:43.055475 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrb7QagdkaPBeFWjqS03Vj63Vxkhx6LCm3Sz2gDd3B1F1jiKhWt6u5VnR0NLAimtK6/pHPvrGWG34y3fLqMYQadO6NKCHM7luKGZGC0BvFfCncLz2++C+wU/s6FogOqZqWBnaeNI9PXIryvswIyzs5qSU9pyH936NKwp/OgSlgPYPu6gilqS8DXJWNDfLpH2C2wHE45jHZlEAbTp+o5jgabFe60bs7uL0k5Yy3mZQlCSaTQdt2B6kJ4dJSkVOw0BFdLdtnLWFiwHdNAvbBivZoJuwk7GAGhLfc5yaaXuqNucEjHizIWm0oVRTcB0obtRv1NXkLCdhfE8KpQNwIIYip+oh3YrXdRfdiVjTOtrGoRwLCmucPRQsd5UcWcM0oCr/Oo/5irkF8hdA+CSFe6YJN+dF8fW0+G9YPjCYAotJhrNAC7o18tOBB55uKeQZoTcytG8OgQGrd036ri3FCuT08HaWC1I0GhjMxalK4w3NY8ESrRWOuouNc+YMD0JAXKzM=) 2025-09-27 21:37:43.055488 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJf8YRhwgrv1kncn7mxk/PDLGi4rMi7AB2aANyM6Wmtd) 2025-09-27 21:37:43.055499 | orchestrator | 2025-09-27 21:37:43.055511 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:43.055522 | orchestrator | Saturday 27 September 2025 21:37:39 +0000 (0:00:01.050) 0:00:09.653 **** 2025-09-27 21:37:43.055533 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC5ThTu+f8qw5kVHD7S6VojVGxkx1Ybpt0+/4JOnv4hv) 2025-09-27 21:37:43.055544 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWOwm+1HauY9XRz6Fo9k8CoH0UfypDtO8oOgKT1yeSlxj/u11/WZ0t8vKkuMi3g1SQGxlZDDff4xaabUhP3dsDCJnnRyDN+qEvoDTSqlZERn8AgUiaRHHwwucnSjjG76ILr4JdBuZjPEM4rSRg7dYD/LnrH0UpAIOx7GQSCgEYkyboAGaLsjdP2Xqhn93Oiz84rE5hnbSnqS1r2E3f/eZ2UmdMV8MkirBs8EtNgvQuhvfTarO+ybxSFMjwn3JwWErmpzO96MtIyypx537vo6B0EGy/YIy6eou1HyOSkJd+33KYEThVMgkHkGGnvQZe0WQ5LkEPbVWtLBMEHWRhnnEJv644p5dFJzY1A5FVg3ytsPcHwhfZzsLWESXi63F97PzlPxThDkiCKUapeDsZttPneisJLxa9nYyOGqHXm7GvsneJP/8v0u6+58A9gdZj7e2H3X/7tactzAwVB+qBBUSq1P53rp4JweR5xcaXlrvesQJvs016rLUsoS1S5n8PAL8=) 2025-09-27 21:37:43.055557 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP7FiWFiJt2nt4mHe1jkA7idoh0wT6zeE/KvoYPzD6KbCoRAR0nOLl8mkb2G62Q2DRsnmyrNKHba0Pb0GW4uqug=) 2025-09-27 21:37:43.055568 | orchestrator | 2025-09-27 21:37:43.055579 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:43.055590 | orchestrator | Saturday 27 September 2025 21:37:40 +0000 (0:00:01.042) 0:00:10.696 **** 2025-09-27 21:37:43.055600 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILpNAH/wtHEdMyQVOk7e7z7xupGkIelJpio9QzPAhJ6F) 2025-09-27 21:37:43.055610 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDOXWCySfPT3nuM+0zuSHPwupl+lHF/KbVtMH8QR83jBd4eY8Dba9DsmiTk4Iz4mXskiIXWxRdZ8pMgyXcqheUZhmtvOT4OK3KvHULaUR9rMstnQLsm2KUDa6oBtkBiIPXOohWH6qHWyhENN8iULtQk/bS56H5P4vsm2VbN+bQWRwoAulIKcPZEUuILnRD3gLAQz2b1XvgQLipAg5tkkiriJAuxRUhLkPWMXkmlK8oEEnoPIoosc59FhraaPdxAASPdwdxLWOEKcXdFwv2Bf3mYVTcc19U47j6K2ogSztf05GvawfkrA+QnHE6buI0zNqdY5mEzIIfGO4TcopRY/ATN6gXNmfSj51+e5Q42Q9aYwQVqssVxYsCeWT0Y9DJ/WrVw9SQNs07oHaYndcr9dM5+7E4C+aJvkx/3m+Y+blwbesK6Tnt2Z3MFK36cS4IP274JUgyqFrQGCthKz7fDjFkRmPnOsAb2fdqeQmOtuai+0v4jslxq37h5j5lU8UZf9dc=) 2025-09-27 21:37:43.055626 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNK2hIPRLzSwBvyIqT7IBoUG30W0T6ooFBlyxcHqfivHUauI/ClZaaCOO+5bWhmQSCPTccVHllKiHYSA1DQcyKQ=) 2025-09-27 21:37:43.055636 | orchestrator | 2025-09-27 21:37:43.055646 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:43.055655 | orchestrator | Saturday 27 September 2025 21:37:41 +0000 (0:00:01.051) 0:00:11.747 **** 2025-09-27 21:37:43.055674 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCGR5PfulBg3jcMxex2kpegdC0JqUNEx12763JC+sNsRG1J3Eo8g7Ha816tqpwyrHDh5SuGy56DtRgHo1fxh/TUaT4hPpsT4D+EC5HpYZrVwa3zEZFT/6xpIhpmld2jiB09NNUO8cULPv9qjFSex/Cmxiq4WfMhzGiCAgf7cCXwy/L1MNqI3RBqSfYqD8wGCo3CY+Y7PtPkzsPG2G1Y+U1U/kNzqNjjd9O2VlHRVPYhu5pIWJL05IM1y7wb5MdsjMv5W1HVnuGp4VhFdISdJIQQNHb7tvarSeDCE82q7UEu5Hy8eT+5dm0JvlteM9JJF0vcqAgujno9VMSncyLflfVwvtAe/Pl2mfn3tGQktlxTsJ4HxLozGW1OnzuEtu+TvPnjhZijf5f8WO0mb21wK81j0b87uYKikiXa1wwR6f4jhv5d1QFnswSPWhCtNdJrc9WxgbAD4Hciuyivy3pmPNR/TN63z9O0uGge69xT77DTbTH3Cy1aY9EVPXZZmNVRoDE=) 2025-09-27 21:37:53.719534 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGSGNsTxXOJqB6ZQhl8gefLaCYgMlKkj/WnG+17wvdgdR5jNcL2M3f6Mw2iMA1hWguEbwgG/XD80F0ikhokOdQc=) 
2025-09-27 21:37:53.719737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO12D8m87oIjoPQNYd91X93xdOfa74hnz69SjHATIaiz) 2025-09-27 21:37:53.719757 | orchestrator | 2025-09-27 21:37:53.719771 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:53.719783 | orchestrator | Saturday 27 September 2025 21:37:43 +0000 (0:00:01.057) 0:00:12.805 **** 2025-09-27 21:37:53.719797 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMrCRiTwOfkY3XvdLm23mMn/2HZ97mun8vKaMZGYJuu17WfxEfftPE9NQCfeZsBlheIeuImLx1wCzFNjeU1zeS1xb+cscbtE+HGAvKHk5h7xGd9cLeqtbPYdLkuDu745lXSBtavUoNZDsyrBL7scJwwgKQfu4Afg83SC85mmcboSbxvFpc+PAL/yVM4iy9wZdHhJ+82xJ3J4hcfkDn7JFShYWoYy7Ilx13f5TegfDZv/lcaZBBRMIoGY0gF4AtQMXG1doaPWGKze2OiCQmPSRlBo1t+5+mUeJ8/qzi/RFyqon4qa+RzlaKJloKZ1FATkjHfXPrtSYvtTpKjodK6J2YpE0ephQtw8axHBf4eyoFMhGnIY5Qeymr/3L6uI5Cd+TGiUTUveKR1xtOdJQQE+isAAMOkBnRsiDfkInNMdaSUhZW/JzysXuCaw12/hbAuCcmPkb/+fO8+yi3V2USa8o1FeAT3bVsy6+ijMHOtOjLk+auiX2fMQxBueLA0pHL2cU=) 2025-09-27 21:37:53.719811 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHzGlFVg22pz4ngqFfzhBXntSt+7gwN5hWL6jtttyYKy+XU+GOQT8QjwhO4RB6E4U3pxVfX+cZfHZnDBlTPP+AI=) 2025-09-27 21:37:53.719823 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGtTuv3WHwbvi36JaYKgvkcmGp+Kp0VmFmo6pykZfG9) 2025-09-27 21:37:53.719834 | orchestrator | 2025-09-27 21:37:53.719845 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-27 21:37:53.719857 | orchestrator | Saturday 27 September 2025 21:37:44 +0000 (0:00:01.018) 0:00:13.823 **** 2025-09-27 21:37:53.719869 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-27 21:37:53.719880 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2025-09-27 21:37:53.719909 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-27 21:37:53.719921 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-27 21:37:53.719931 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-27 21:37:53.719943 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-27 21:37:53.719954 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-27 21:37:53.719965 | orchestrator | 2025-09-27 21:37:53.719976 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-27 21:37:53.719987 | orchestrator | Saturday 27 September 2025 21:37:49 +0000 (0:00:05.233) 0:00:19.056 **** 2025-09-27 21:37:53.720006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-27 21:37:53.720042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-27 21:37:53.720056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-27 21:37:53.720068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-27 21:37:53.720080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-27 21:37:53.720092 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-27 21:37:53.720105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-27 21:37:53.720116 | orchestrator | 2025-09-27 21:37:53.720129 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:53.720142 | orchestrator | Saturday 27 September 2025 21:37:49 +0000 (0:00:00.171) 0:00:19.228 **** 2025-09-27 21:37:53.720154 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOzZSPeDmm5uOpw9w5fE7zVcssoXX3X5d3TPREGFLfDB) 2025-09-27 21:37:53.720196 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC12GaJ+3eTpNwEaqYNA6V4SapkFCoKPvZHs8t9Umu4UxUL6HuRTZ5we1vSNw24WxZIpY9+JaR0WtNVeo9AnUVNHa39qcD+v85H9PED1NrBpRMDTvUvDP/OxOq386ywtKNm9n/p53up9a9u1t/U7OS8E4spPh6uc2ayANxxwCHLN/Fj5h4IWtWfvXhVFHXihsZH/JwmY0vMxR0QeysBmZApj1ZGso+gOGRR6vz5XH46kUS3omBXJmIsVJG54V2QtTIry5l3Avv/ua27UYSHMZRkzgd6i+110C9MD2xgsrPINXsBsrWa8NeN34qN+BNSqKljqcPK2v+DqU9NebZXash45njeKm5h8v4LKYGX+yg54hWQMLy63ZCLWVfcu+nF4btMavFDv8hmfDu6Yy9Sa0GTVWpfQpiDKqT9BVAFgCU1HD2pfYwfKChkSga0YtKtdug/PfzVLTrGlHfBn34j7R0B6vICideqG9+pgAvSkhjfGQeuhqmQGznLUyOYm609sVE=) 2025-09-27 21:37:53.720210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEvyUkifTwqC55Fxcfa1eNdbli04wCIlh6nWWp50EsRZeAV0rL8YzRpQqIzgkAu3Er55n5vwfElHX7H0BjyQQJc=) 2025-09-27 21:37:53.720223 | orchestrator | 2025-09-27 21:37:53.720271 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:53.720283 | orchestrator | Saturday 27 September 2025 
21:37:50 +0000 (0:00:01.036) 0:00:20.265 **** 2025-09-27 21:37:53.720297 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUKM38HFsBL4o2hE4ljqu61xLW0ZnL+MSsz8dZ3jxlwrJr/K8w8OvYe8TxABtl6zSN7kxX+kG8AGU4hqIqR5GFfbz2MdPUTukLUaGjdDiP5m5noVcdLbebOGpS5/Uru9cyXENXayYo2bFQ/bGFwJPVvuurm6AGJG0MFlXYe75bhzHKEPW/xwFupCQ6U9uG+V00LQ8bxP6cUdNgJqDlW0g2FIiRqReW9kgc20PMk3ZqTeXKfY+igStE6PisQaTr4Q1PAvl9Dn3nsKjra0pEJURYqebgl38LvPyqpAFHXYYQ3Ab1SZTK0X2LWCTS44LdDXORShu3hKvUB+BwEAAoEVp0QZ49vP9gw4xw8joi5S+E5vvK4vQaoMb9hhrtH2QNipauh5jVTV0Ut4OsJa6absOUFXLPn0flnJ/0Rp0j7KLbnsBuLi4nFWaZZiccRgj/1Fd9USuFz9JReA0Zff4ofnYxjkOhMGHoqnf9eGtIxfjlbU4k+PrDDS+qxnpZG8TfEwc=) 2025-09-27 21:37:53.720310 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHDiaxJ18JAHV3ijpYzYZsrkWIoBq2yJaCd0wABQI0b6MpznquesfDYBoJsF8fOwGK6Adk1/MeQy8THE3BkVAYs=) 2025-09-27 21:37:53.720322 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJzWhV0jBFZFepl/Mqc3ddfBqxblm/U9pOb/lCiICMop) 2025-09-27 21:37:53.720343 | orchestrator | 2025-09-27 21:37:53.720356 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:53.720368 | orchestrator | Saturday 27 September 2025 21:37:51 +0000 (0:00:01.037) 0:00:21.303 **** 2025-09-27 21:37:53.720381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCrb7QagdkaPBeFWjqS03Vj63Vxkhx6LCm3Sz2gDd3B1F1jiKhWt6u5VnR0NLAimtK6/pHPvrGWG34y3fLqMYQadO6NKCHM7luKGZGC0BvFfCncLz2++C+wU/s6FogOqZqWBnaeNI9PXIryvswIyzs5qSU9pyH936NKwp/OgSlgPYPu6gilqS8DXJWNDfLpH2C2wHE45jHZlEAbTp+o5jgabFe60bs7uL0k5Yy3mZQlCSaTQdt2B6kJ4dJSkVOw0BFdLdtnLWFiwHdNAvbBivZoJuwk7GAGhLfc5yaaXuqNucEjHizIWm0oVRTcB0obtRv1NXkLCdhfE8KpQNwIIYip+oh3YrXdRfdiVjTOtrGoRwLCmucPRQsd5UcWcM0oCr/Oo/5irkF8hdA+CSFe6YJN+dF8fW0+G9YPjCYAotJhrNAC7o18tOBB55uKeQZoTcytG8OgQGrd036ri3FCuT08HaWC1I0GhjMxalK4w3NY8ESrRWOuouNc+YMD0JAXKzM=) 2025-09-27 21:37:53.720394 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDg16x591R/ezJbA886ryHpqa8F/r3TzvAegw5GYLMJTKoZer3Wo0upU2pt37NnPn5OqlOnrdTIsajhYtrgtrLw=) 2025-09-27 21:37:53.720407 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJf8YRhwgrv1kncn7mxk/PDLGi4rMi7AB2aANyM6Wmtd) 2025-09-27 21:37:53.720419 | orchestrator | 2025-09-27 21:37:53.720432 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:53.720445 | orchestrator | Saturday 27 September 2025 21:37:52 +0000 (0:00:01.056) 0:00:22.359 **** 2025-09-27 21:37:53.720456 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC5ThTu+f8qw5kVHD7S6VojVGxkx1Ybpt0+/4JOnv4hv) 2025-09-27 21:37:53.720474 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDWOwm+1HauY9XRz6Fo9k8CoH0UfypDtO8oOgKT1yeSlxj/u11/WZ0t8vKkuMi3g1SQGxlZDDff4xaabUhP3dsDCJnnRyDN+qEvoDTSqlZERn8AgUiaRHHwwucnSjjG76ILr4JdBuZjPEM4rSRg7dYD/LnrH0UpAIOx7GQSCgEYkyboAGaLsjdP2Xqhn93Oiz84rE5hnbSnqS1r2E3f/eZ2UmdMV8MkirBs8EtNgvQuhvfTarO+ybxSFMjwn3JwWErmpzO96MtIyypx537vo6B0EGy/YIy6eou1HyOSkJd+33KYEThVMgkHkGGnvQZe0WQ5LkEPbVWtLBMEHWRhnnEJv644p5dFJzY1A5FVg3ytsPcHwhfZzsLWESXi63F97PzlPxThDkiCKUapeDsZttPneisJLxa9nYyOGqHXm7GvsneJP/8v0u6+58A9gdZj7e2H3X/7tactzAwVB+qBBUSq1P53rp4JweR5xcaXlrvesQJvs016rLUsoS1S5n8PAL8=) 2025-09-27 21:37:53.720497 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP7FiWFiJt2nt4mHe1jkA7idoh0wT6zeE/KvoYPzD6KbCoRAR0nOLl8mkb2G62Q2DRsnmyrNKHba0Pb0GW4uqug=) 2025-09-27 21:37:58.062143 | orchestrator | 2025-09-27 21:37:58.062281 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:58.062299 | orchestrator | Saturday 27 September 2025 21:37:53 +0000 (0:00:01.108) 0:00:23.468 **** 2025-09-27 21:37:58.062315 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOXWCySfPT3nuM+0zuSHPwupl+lHF/KbVtMH8QR83jBd4eY8Dba9DsmiTk4Iz4mXskiIXWxRdZ8pMgyXcqheUZhmtvOT4OK3KvHULaUR9rMstnQLsm2KUDa6oBtkBiIPXOohWH6qHWyhENN8iULtQk/bS56H5P4vsm2VbN+bQWRwoAulIKcPZEUuILnRD3gLAQz2b1XvgQLipAg5tkkiriJAuxRUhLkPWMXkmlK8oEEnoPIoosc59FhraaPdxAASPdwdxLWOEKcXdFwv2Bf3mYVTcc19U47j6K2ogSztf05GvawfkrA+QnHE6buI0zNqdY5mEzIIfGO4TcopRY/ATN6gXNmfSj51+e5Q42Q9aYwQVqssVxYsCeWT0Y9DJ/WrVw9SQNs07oHaYndcr9dM5+7E4C+aJvkx/3m+Y+blwbesK6Tnt2Z3MFK36cS4IP274JUgyqFrQGCthKz7fDjFkRmPnOsAb2fdqeQmOtuai+0v4jslxq37h5j5lU8UZf9dc=) 2025-09-27 21:37:58.062331 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILpNAH/wtHEdMyQVOk7e7z7xupGkIelJpio9QzPAhJ6F) 2025-09-27 21:37:58.062345 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNK2hIPRLzSwBvyIqT7IBoUG30W0T6ooFBlyxcHqfivHUauI/ClZaaCOO+5bWhmQSCPTccVHllKiHYSA1DQcyKQ=) 2025-09-27 21:37:58.062356 | orchestrator | 2025-09-27 21:37:58.062368 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:58.062406 | orchestrator | Saturday 27 September 2025 21:37:54 +0000 (0:00:01.085) 0:00:24.553 **** 2025-09-27 21:37:58.062418 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO12D8m87oIjoPQNYd91X93xdOfa74hnz69SjHATIaiz) 2025-09-27 21:37:58.062429 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCGR5PfulBg3jcMxex2kpegdC0JqUNEx12763JC+sNsRG1J3Eo8g7Ha816tqpwyrHDh5SuGy56DtRgHo1fxh/TUaT4hPpsT4D+EC5HpYZrVwa3zEZFT/6xpIhpmld2jiB09NNUO8cULPv9qjFSex/Cmxiq4WfMhzGiCAgf7cCXwy/L1MNqI3RBqSfYqD8wGCo3CY+Y7PtPkzsPG2G1Y+U1U/kNzqNjjd9O2VlHRVPYhu5pIWJL05IM1y7wb5MdsjMv5W1HVnuGp4VhFdISdJIQQNHb7tvarSeDCE82q7UEu5Hy8eT+5dm0JvlteM9JJF0vcqAgujno9VMSncyLflfVwvtAe/Pl2mfn3tGQktlxTsJ4HxLozGW1OnzuEtu+TvPnjhZijf5f8WO0mb21wK81j0b87uYKikiXa1wwR6f4jhv5d1QFnswSPWhCtNdJrc9WxgbAD4Hciuyivy3pmPNR/TN63z9O0uGge69xT77DTbTH3Cy1aY9EVPXZZmNVRoDE=) 2025-09-27 21:37:58.062441 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGSGNsTxXOJqB6ZQhl8gefLaCYgMlKkj/WnG+17wvdgdR5jNcL2M3f6Mw2iMA1hWguEbwgG/XD80F0ikhokOdQc=) 2025-09-27 21:37:58.062452 | orchestrator | 2025-09-27 21:37:58.062464 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:37:58.062475 | orchestrator | Saturday 27 September 2025 21:37:55 +0000 (0:00:01.026) 0:00:25.579 **** 2025-09-27 21:37:58.062487 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMrCRiTwOfkY3XvdLm23mMn/2HZ97mun8vKaMZGYJuu17WfxEfftPE9NQCfeZsBlheIeuImLx1wCzFNjeU1zeS1xb+cscbtE+HGAvKHk5h7xGd9cLeqtbPYdLkuDu745lXSBtavUoNZDsyrBL7scJwwgKQfu4Afg83SC85mmcboSbxvFpc+PAL/yVM4iy9wZdHhJ+82xJ3J4hcfkDn7JFShYWoYy7Ilx13f5TegfDZv/lcaZBBRMIoGY0gF4AtQMXG1doaPWGKze2OiCQmPSRlBo1t+5+mUeJ8/qzi/RFyqon4qa+RzlaKJloKZ1FATkjHfXPrtSYvtTpKjodK6J2YpE0ephQtw8axHBf4eyoFMhGnIY5Qeymr/3L6uI5Cd+TGiUTUveKR1xtOdJQQE+isAAMOkBnRsiDfkInNMdaSUhZW/JzysXuCaw12/hbAuCcmPkb/+fO8+yi3V2USa8o1FeAT3bVsy6+ijMHOtOjLk+auiX2fMQxBueLA0pHL2cU=) 2025-09-27 21:37:58.062498 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHzGlFVg22pz4ngqFfzhBXntSt+7gwN5hWL6jtttyYKy+XU+GOQT8QjwhO4RB6E4U3pxVfX+cZfHZnDBlTPP+AI=) 2025-09-27 21:37:58.062510 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGtTuv3WHwbvi36JaYKgvkcmGp+Kp0VmFmo6pykZfG9) 2025-09-27 21:37:58.062521 | orchestrator | 2025-09-27 21:37:58.062532 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-27 21:37:58.062543 | orchestrator | Saturday 27 September 2025 21:37:56 +0000 (0:00:01.049) 0:00:26.629 **** 2025-09-27 21:37:58.062554 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-27 21:37:58.062566 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-27 21:37:58.062576 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-27 21:37:58.062587 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-27 21:37:58.062598 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-27 21:37:58.062610 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-27 21:37:58.062623 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-27 21:37:58.062636 | orchestrator | 
skipping: [testbed-manager] 2025-09-27 21:37:58.062648 | orchestrator | 2025-09-27 21:37:58.062680 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-27 21:37:58.062693 | orchestrator | Saturday 27 September 2025 21:37:57 +0000 (0:00:00.156) 0:00:26.785 **** 2025-09-27 21:37:58.062705 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:37:58.062717 | orchestrator | 2025-09-27 21:37:58.062730 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-27 21:37:58.062742 | orchestrator | Saturday 27 September 2025 21:37:57 +0000 (0:00:00.059) 0:00:26.845 **** 2025-09-27 21:37:58.062755 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:37:58.062775 | orchestrator | 2025-09-27 21:37:58.062788 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-27 21:37:58.062801 | orchestrator | Saturday 27 September 2025 21:37:57 +0000 (0:00:00.059) 0:00:26.904 **** 2025-09-27 21:37:58.062813 | orchestrator | changed: [testbed-manager] 2025-09-27 21:37:58.062826 | orchestrator | 2025-09-27 21:37:58.062839 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:37:58.062851 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 21:37:58.062865 | orchestrator | 2025-09-27 21:37:58.062877 | orchestrator | 2025-09-27 21:37:58.062890 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:37:58.062902 | orchestrator | Saturday 27 September 2025 21:37:57 +0000 (0:00:00.674) 0:00:27.578 **** 2025-09-27 21:37:58.062915 | orchestrator | =============================================================================== 2025-09-27 21:37:58.062927 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.00s 2025-09-27 
21:37:58.062940 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.23s 2025-09-27 21:37:58.062954 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-09-27 21:37:58.062966 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-27 21:37:58.062977 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-27 21:37:58.062987 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-27 21:37:58.062998 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-27 21:37:58.063009 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-27 21:37:58.063020 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-27 21:37:58.063030 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-27 21:37:58.063041 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-27 21:37:58.063051 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-27 21:37:58.063080 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-27 21:37:58.063091 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-27 21:37:58.063102 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-27 21:37:58.063117 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-27 21:37:58.063128 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.67s 2025-09-27 
21:37:58.063138 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-09-27 21:37:58.063150 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-09-27 21:37:58.063161 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-09-27 21:37:58.354609 | orchestrator | + osism apply squid 2025-09-27 21:38:10.335273 | orchestrator | 2025-09-27 21:38:10 | INFO  | Task 163b0628-42f3-4f9a-a3d9-b188f5850d0b (squid) was prepared for execution. 2025-09-27 21:38:10.335394 | orchestrator | 2025-09-27 21:38:10 | INFO  | It takes a moment until task 163b0628-42f3-4f9a-a3d9-b188f5850d0b (squid) has been started and output is visible here. 2025-09-27 21:40:03.886468 | orchestrator | 2025-09-27 21:40:03.886585 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-27 21:40:03.886599 | orchestrator | 2025-09-27 21:40:03.886608 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-27 21:40:03.886617 | orchestrator | Saturday 27 September 2025 21:38:14 +0000 (0:00:00.165) 0:00:00.165 **** 2025-09-27 21:40:03.886651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:40:03.886662 | orchestrator | 2025-09-27 21:40:03.886670 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-27 21:40:03.886679 | orchestrator | Saturday 27 September 2025 21:38:14 +0000 (0:00:00.091) 0:00:00.257 **** 2025-09-27 21:40:03.886687 | orchestrator | ok: [testbed-manager] 2025-09-27 21:40:03.886697 | orchestrator | 2025-09-27 21:40:03.886705 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-27 
21:40:03.886713 | orchestrator | Saturday 27 September 2025 21:38:15 +0000 (0:00:01.470) 0:00:01.728 **** 2025-09-27 21:40:03.886722 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-27 21:40:03.886730 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-27 21:40:03.886738 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-27 21:40:03.886746 | orchestrator | 2025-09-27 21:40:03.886753 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-27 21:40:03.886761 | orchestrator | Saturday 27 September 2025 21:38:16 +0000 (0:00:01.152) 0:00:02.880 **** 2025-09-27 21:40:03.886769 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-27 21:40:03.886777 | orchestrator | 2025-09-27 21:40:03.886785 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-27 21:40:03.886792 | orchestrator | Saturday 27 September 2025 21:38:17 +0000 (0:00:01.091) 0:00:03.972 **** 2025-09-27 21:40:03.886800 | orchestrator | ok: [testbed-manager] 2025-09-27 21:40:03.886808 | orchestrator | 2025-09-27 21:40:03.886816 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-27 21:40:03.886824 | orchestrator | Saturday 27 September 2025 21:38:18 +0000 (0:00:00.366) 0:00:04.339 **** 2025-09-27 21:40:03.886831 | orchestrator | changed: [testbed-manager] 2025-09-27 21:40:03.886854 | orchestrator | 2025-09-27 21:40:03.886862 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-27 21:40:03.886870 | orchestrator | Saturday 27 September 2025 21:38:19 +0000 (0:00:00.906) 0:00:05.246 **** 2025-09-27 21:40:03.886878 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-09-27 21:40:03.886886 | orchestrator | ok: [testbed-manager] 2025-09-27 21:40:03.886894 | orchestrator | 2025-09-27 21:40:03.886902 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-27 21:40:03.886910 | orchestrator | Saturday 27 September 2025 21:38:50 +0000 (0:00:31.575) 0:00:36.821 **** 2025-09-27 21:40:03.886917 | orchestrator | changed: [testbed-manager] 2025-09-27 21:40:03.886925 | orchestrator | 2025-09-27 21:40:03.886933 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-27 21:40:03.886941 | orchestrator | Saturday 27 September 2025 21:39:02 +0000 (0:00:12.030) 0:00:48.852 **** 2025-09-27 21:40:03.886949 | orchestrator | Pausing for 60 seconds 2025-09-27 21:40:03.886957 | orchestrator | changed: [testbed-manager] 2025-09-27 21:40:03.886966 | orchestrator | 2025-09-27 21:40:03.886974 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-27 21:40:03.886981 | orchestrator | Saturday 27 September 2025 21:40:02 +0000 (0:01:00.079) 0:01:48.931 **** 2025-09-27 21:40:03.886989 | orchestrator | ok: [testbed-manager] 2025-09-27 21:40:03.886997 | orchestrator | 2025-09-27 21:40:03.887005 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-27 21:40:03.887014 | orchestrator | Saturday 27 September 2025 21:40:03 +0000 (0:00:00.083) 0:01:49.015 **** 2025-09-27 21:40:03.887022 | orchestrator | changed: [testbed-manager] 2025-09-27 21:40:03.887031 | orchestrator | 2025-09-27 21:40:03.887040 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:40:03.887049 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:40:03.887065 | orchestrator | 2025-09-27 21:40:03.887074 | orchestrator | 2025-09-27 21:40:03.887083 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-27 21:40:03.887092 | orchestrator | Saturday 27 September 2025 21:40:03 +0000 (0:00:00.596) 0:01:49.611 **** 2025-09-27 21:40:03.887123 | orchestrator | =============================================================================== 2025-09-27 21:40:03.887132 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-27 21:40:03.887141 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.58s 2025-09-27 21:40:03.887160 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.03s 2025-09-27 21:40:03.887170 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.47s 2025-09-27 21:40:03.887179 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s 2025-09-27 21:40:03.887188 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2025-09-27 21:40:03.887197 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.91s 2025-09-27 21:40:03.887206 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-09-27 21:40:03.887215 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-09-27 21:40:03.887224 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-09-27 21:40:03.887233 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-09-27 21:40:04.155717 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-27 21:40:04.156515 | orchestrator | ++ semver latest 9.0.0 2025-09-27 21:40:04.203220 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-27 21:40:04.203311 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-27 21:40:04.204055 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-27 21:40:16.220253 | orchestrator | 2025-09-27 21:40:16 | INFO  | Task dee900b9-23c2-42d3-9fc4-df287bf509d3 (operator) was prepared for execution. 2025-09-27 21:40:16.220357 | orchestrator | 2025-09-27 21:40:16 | INFO  | It takes a moment until task dee900b9-23c2-42d3-9fc4-df287bf509d3 (operator) has been started and output is visible here. 2025-09-27 21:40:31.979712 | orchestrator | 2025-09-27 21:40:31.979833 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-27 21:40:31.979850 | orchestrator | 2025-09-27 21:40:31.979863 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:40:31.979874 | orchestrator | Saturday 27 September 2025 21:40:20 +0000 (0:00:00.152) 0:00:00.152 **** 2025-09-27 21:40:31.979886 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:40:31.979899 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:40:31.979910 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:40:31.979921 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:40:31.979932 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:40:31.979942 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:40:31.979953 | orchestrator | 2025-09-27 21:40:31.979964 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-27 21:40:31.979976 | orchestrator | Saturday 27 September 2025 21:40:23 +0000 (0:00:03.380) 0:00:03.532 **** 2025-09-27 21:40:31.979986 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:40:31.979997 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:40:31.980009 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:40:31.980019 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:40:31.980030 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:40:31.980041 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:40:31.980052 | orchestrator | 2025-09-27 
2025-09-27 21:40:31.980066 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-09-27 21:40:31.980148 | orchestrator |
2025-09-27 21:40:31.980169 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-09-27 21:40:31.980180 | orchestrator | Saturday 27 September 2025 21:40:24 +0000 (0:00:00.707) 0:00:04.239 ****
2025-09-27 21:40:31.980191 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:40:31.980238 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:40:31.980260 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:40:31.980278 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:40:31.980295 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:40:31.980307 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:40:31.980320 | orchestrator |
2025-09-27 21:40:31.980332 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-09-27 21:40:31.980346 | orchestrator | Saturday 27 September 2025 21:40:24 +0000 (0:00:00.145) 0:00:04.385 ****
2025-09-27 21:40:31.980366 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:40:31.980387 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:40:31.980403 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:40:31.980415 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:40:31.980426 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:40:31.980438 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:40:31.980450 | orchestrator |
2025-09-27 21:40:31.980462 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-09-27 21:40:31.980492 | orchestrator | Saturday 27 September 2025 21:40:24 +0000 (0:00:00.161) 0:00:04.547 ****
2025-09-27 21:40:31.980513 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:40:31.980545 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:40:31.980557 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:40:31.980568 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:40:31.980578 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:40:31.980589 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:40:31.980600 | orchestrator |
2025-09-27 21:40:31.980611 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-09-27 21:40:31.980621 | orchestrator | Saturday 27 September 2025 21:40:25 +0000 (0:00:00.644) 0:00:05.191 ****
2025-09-27 21:40:31.980632 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:40:31.980643 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:40:31.980654 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:40:31.980665 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:40:31.980675 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:40:31.980686 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:40:31.980696 | orchestrator |
2025-09-27 21:40:31.980707 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-09-27 21:40:31.980718 | orchestrator | Saturday 27 September 2025 21:40:26 +0000 (0:00:00.859) 0:00:06.051 ****
2025-09-27 21:40:31.980729 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-09-27 21:40:31.980740 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-09-27 21:40:31.980750 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-09-27 21:40:31.980761 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-09-27 21:40:31.980772 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-09-27 21:40:31.980782 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-09-27 21:40:31.980793 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-09-27 21:40:31.980804 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-09-27 21:40:31.980819 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-09-27 21:40:31.980830 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-09-27 21:40:31.980841 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-09-27 21:40:31.980852 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-09-27 21:40:31.980862 | orchestrator |
2025-09-27 21:40:31.980873 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-09-27 21:40:31.980884 | orchestrator | Saturday 27 September 2025 21:40:27 +0000 (0:00:01.314) 0:00:07.365 ****
2025-09-27 21:40:31.980896 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:40:31.980915 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:40:31.980935 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:40:31.980952 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:40:31.980963 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:40:31.980973 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:40:31.980984 | orchestrator |
2025-09-27 21:40:31.981005 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-09-27 21:40:31.981017 | orchestrator | Saturday 27 September 2025 21:40:28 +0000 (0:00:01.228) 0:00:08.594 ****
2025-09-27 21:40:31.981028 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-09-27 21:40:31.981038 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-09-27 21:40:31.981049 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-27 21:40:31.981061 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-27 21:40:31.981142 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-27 21:40:31.981165 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-27 21:40:31.981185 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-27 21:40:31.981205 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-27 21:40:31.981224 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-27 21:40:31.981240 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-27 21:40:31.981251 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-27 21:40:31.981261 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-27 21:40:31.981271 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-27 21:40:31.981282 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-27 21:40:31.981292 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-27 21:40:31.981303 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-27 21:40:31.981314 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-27 21:40:31.981324 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-27 21:40:31.981335 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-27 21:40:31.981346 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-27 21:40:31.981356 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-27 21:40:31.981367 | orchestrator |
2025-09-27 21:40:31.981377 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-27 21:40:31.981389 | orchestrator | Saturday 27 September 2025 21:40:29 +0000 (0:00:01.389) 0:00:09.983 ****
2025-09-27 21:40:31.981400 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:40:31.981410 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:40:31.981421 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:40:31.981431 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:40:31.981442 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:40:31.981452 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:40:31.981463 | orchestrator |
2025-09-27 21:40:31.981474 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-27 21:40:31.981484 | orchestrator | Saturday 27 September 2025 21:40:30 +0000 (0:00:00.585) 0:00:10.156 ****
2025-09-27 21:40:31.981495 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:40:31.981505 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:40:31.981516 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:40:31.981527 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:40:31.981538 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:40:31.981548 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:40:31.981559 | orchestrator |
2025-09-27 21:40:31.981569 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-27 21:40:31.981580 | orchestrator | Saturday 27 September 2025 21:40:30 +0000 (0:00:00.170) 0:00:10.742 ****
2025-09-27 21:40:31.981591 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:40:31.981601 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:40:31.981612 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:40:31.981632 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:40:31.981642 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:40:31.981653 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:40:31.981663 | orchestrator |
2025-09-27 21:40:31.981674 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-27 21:40:31.981685 | orchestrator | Saturday 27 September 2025 21:40:30 +0000 (0:00:00.170) 0:00:10.913 ****
2025-09-27 21:40:31.981696 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-27 21:40:31.981706 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-27 21:40:31.981717 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:40:31.981727 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-27 21:40:31.981738 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:40:31.981748 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:40:31.981759 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-27 21:40:31.981770 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-27 21:40:31.981780 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-27 21:40:31.981791 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:40:31.981801 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:40:31.981812 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:40:31.981822 | orchestrator |
2025-09-27 21:40:31.981833 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-27 21:40:31.981844 | orchestrator | Saturday 27 September 2025 21:40:31 +0000 (0:00:00.668) 0:00:11.581 ****
2025-09-27 21:40:31.981854 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:40:31.981865 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:40:31.981876 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:40:31.981886 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:40:31.981897 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:40:31.981908 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:40:31.981919 | orchestrator |
2025-09-27 21:40:31.981929 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-27 21:40:31.981940 | orchestrator | Saturday 27 September 2025 21:40:31 +0000 (0:00:00.142) 0:00:11.724 ****
2025-09-27 21:40:31.981951 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:40:31.981961 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:40:31.981972 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:40:31.981983 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:40:31.981993 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:40:31.982004 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:40:31.982014 | orchestrator |
2025-09-27 21:40:31.982170 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-27 21:40:31.982182 | orchestrator | Saturday 27 September 2025 21:40:31 +0000 (0:00:00.136) 0:00:11.860 ****
2025-09-27 21:40:31.982193 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:40:31.982204 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:40:31.982214 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:40:31.982225 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:40:31.982246 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:40:33.095030 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:40:33.095223 | orchestrator |
2025-09-27 21:40:33.095251 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-27 21:40:33.095274 | orchestrator | Saturday 27 September 2025 21:40:31 +0000 (0:00:00.156) 0:00:12.017 ****
2025-09-27 21:40:33.095291 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:40:33.095302 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:40:33.095313 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:40:33.095324 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:40:33.095335 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:40:33.095346 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:40:33.095358 | orchestrator |
2025-09-27 21:40:33.095369 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-27 21:40:33.095408 | orchestrator | Saturday 27 September 2025 21:40:32 +0000 (0:00:00.653) 0:00:12.670 ****
2025-09-27 21:40:33.095419 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:40:33.095429 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:40:33.095449 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:40:33.095469 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:40:33.095488 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:40:33.095508 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:40:33.095527 | orchestrator |
2025-09-27 21:40:33.095547 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:40:33.095565 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 21:40:33.095578 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 21:40:33.095589 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 21:40:33.095600 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 21:40:33.095611 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 21:40:33.095640 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 21:40:33.095652 | orchestrator |
2025-09-27 21:40:33.095662 | orchestrator |
2025-09-27 21:40:33.095673 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:40:33.095684 | orchestrator | Saturday 27 September 2025 21:40:32 +0000 (0:00:00.233) 0:00:12.904 ****
2025-09-27 21:40:33.095695 | orchestrator | ===============================================================================
2025-09-27 21:40:33.095705 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s
2025-09-27 21:40:33.095716 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.39s
2025-09-27 21:40:33.095729 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.31s
2025-09-27 21:40:33.095739 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.23s
2025-09-27 21:40:33.095750 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s
2025-09-27 21:40:33.095761 | orchestrator | Do not require tty for all users ---------------------------------------- 0.71s
2025-09-27 21:40:33.095771 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.67s
2025-09-27 21:40:33.095782 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2025-09-27 21:40:33.095793 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s
2025-09-27 21:40:33.095811 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-09-27 21:40:33.095821 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2025-09-27 21:40:33.095832 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2025-09-27 21:40:33.095843 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2025-09-27 21:40:33.095854 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-09-27 21:40:33.095864 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-09-27 21:40:33.095875 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2025-09-27 21:40:33.095885 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-09-27 21:40:33.095896 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2025-09-27 21:40:33.390999 | orchestrator | + osism apply --environment custom facts
2025-09-27 21:40:35.199013 | orchestrator | 2025-09-27 21:40:35 | INFO  | Trying to run play facts in environment custom
2025-09-27 21:40:45.331356 | orchestrator | 2025-09-27 21:40:45 | INFO  | Task 1a1d5e18-2076-42c2-9813-2ba55200a7e0 (facts) was prepared for execution.
2025-09-27 21:40:45.331453 | orchestrator | 2025-09-27 21:40:45 | INFO  | It takes a moment until task 1a1d5e18-2076-42c2-9813-2ba55200a7e0 (facts) has been started and output is visible here.
2025-09-27 21:41:28.715561 | orchestrator |
2025-09-27 21:41:28.715754 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-27 21:41:28.715797 | orchestrator |
2025-09-27 21:41:28.715820 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-27 21:41:28.715840 | orchestrator | Saturday 27 September 2025 21:40:49 +0000 (0:00:00.063) 0:00:00.064 ****
2025-09-27 21:41:28.715862 | orchestrator | ok: [testbed-manager]
2025-09-27 21:41:28.715883 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:41:28.715905 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:41:28.715925 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:41:28.715944 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:41:28.715964 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:41:28.715984 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:41:28.716001 | orchestrator |
2025-09-27 21:41:28.716022 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-27 21:41:28.716090 | orchestrator | Saturday 27 September 2025 21:40:50 +0000 (0:00:01.359) 0:00:01.423 ****
2025-09-27 21:41:28.716114 | orchestrator | ok: [testbed-manager]
2025-09-27 21:41:28.716138 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:41:28.716161 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:41:28.716183 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:41:28.716205 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:41:28.716229 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:41:28.716252 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:41:28.716310 | orchestrator |
2025-09-27 21:41:28.716335 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-27 21:41:28.716357 | orchestrator |
2025-09-27 21:41:28.716379 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-27 21:41:28.716400 | orchestrator | Saturday 27 September 2025 21:40:51 +0000 (0:00:01.125) 0:00:02.548 ****
2025-09-27 21:41:28.716422 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:28.716441 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:28.716460 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:28.716481 | orchestrator |
2025-09-27 21:41:28.716501 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-27 21:41:28.716524 | orchestrator | Saturday 27 September 2025 21:40:51 +0000 (0:00:00.091) 0:00:02.640 ****
2025-09-27 21:41:28.716545 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:28.716566 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:28.716585 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:28.716605 | orchestrator |
2025-09-27 21:41:28.716626 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-27 21:41:28.716647 | orchestrator | Saturday 27 September 2025 21:40:51 +0000 (0:00:00.175) 0:00:02.815 ****
2025-09-27 21:41:28.716668 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:28.716689 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:28.716710 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:28.716730 | orchestrator |
2025-09-27 21:41:28.716750 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-27 21:41:28.716771 | orchestrator | Saturday 27 September 2025 21:40:51 +0000 (0:00:00.177) 0:00:02.993 ****
2025-09-27 21:41:28.716792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 21:41:28.716853 | orchestrator |
2025-09-27 21:41:28.716876 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-27 21:41:28.716895 | orchestrator | Saturday 27 September 2025 21:40:52 +0000 (0:00:00.139) 0:00:03.132 ****
2025-09-27 21:41:28.716916 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:28.716937 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:28.716958 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:28.716978 | orchestrator |
2025-09-27 21:41:28.716998 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-27 21:41:28.717019 | orchestrator | Saturday 27 September 2025 21:40:52 +0000 (0:00:00.431) 0:00:03.564 ****
2025-09-27 21:41:28.717066 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:41:28.717086 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:41:28.717105 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:41:28.717125 | orchestrator |
2025-09-27 21:41:28.717145 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-27 21:41:28.717166 | orchestrator | Saturday 27 September 2025 21:40:52 +0000 (0:00:00.087) 0:00:03.651 ****
2025-09-27 21:41:28.717188 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:41:28.717208 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:41:28.717228 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:41:28.717248 | orchestrator |
2025-09-27 21:41:28.717266 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-27 21:41:28.717288 | orchestrator | Saturday 27 September 2025 21:40:53 +0000 (0:00:01.043) 0:00:04.695 ****
2025-09-27 21:41:28.717308 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:28.717326 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:28.717348 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:28.717369 | orchestrator |
2025-09-27 21:41:28.717391 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-27 21:41:28.717410 | orchestrator | Saturday 27 September 2025 21:40:54 +0000 (0:00:00.456) 0:00:05.152 ****
2025-09-27 21:41:28.717430 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:41:28.717450 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:41:28.717471 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:41:28.717491 | orchestrator |
2025-09-27 21:41:28.717510 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-27 21:41:28.717531 | orchestrator | Saturday 27 September 2025 21:40:55 +0000 (0:00:01.034) 0:00:06.186 ****
2025-09-27 21:41:28.717551 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:41:28.717571 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:41:28.717590 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:41:28.717609 | orchestrator |
2025-09-27 21:41:28.717630 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-27 21:41:28.717648 | orchestrator | Saturday 27 September 2025 21:41:12 +0000 (0:00:17.106) 0:00:23.293 ****
2025-09-27 21:41:28.717691 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:41:28.717712 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:41:28.717732 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:41:28.717753 | orchestrator |
2025-09-27 21:41:28.717774 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-27 21:41:28.717826 | orchestrator | Saturday 27 September 2025 21:41:12 +0000 (0:00:00.082) 0:00:23.375 ****
2025-09-27 21:41:28.717848 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:41:28.717869 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:41:28.717889 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:41:28.717906 | orchestrator |
2025-09-27 21:41:28.717923 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-27 21:41:28.717940 | orchestrator | Saturday 27 September 2025 21:41:19 +0000 (0:00:07.506) 0:00:30.881 ****
2025-09-27 21:41:28.717957 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:28.717978 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:28.717999 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:28.718192 | orchestrator |
2025-09-27 21:41:28.718222 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-27 21:41:28.718266 | orchestrator | Saturday 27 September 2025 21:41:20 +0000 (0:00:00.435) 0:00:31.317 ****
2025-09-27 21:41:28.718286 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-27 21:41:28.718307 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-27 21:41:28.718328 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-27 21:41:28.718348 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-27 21:41:28.718368 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-27 21:41:28.718388 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-27 21:41:28.718408 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-27 21:41:28.718428 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-27 21:41:28.718448 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-27 21:41:28.718468 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-27 21:41:28.718488 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-27 21:41:28.718504 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-27 21:41:28.718523 | orchestrator |
2025-09-27 21:41:28.718545 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-27 21:41:28.718565 | orchestrator | Saturday 27 September 2025 21:41:23 +0000 (0:00:03.444) 0:00:34.761 ****
2025-09-27 21:41:28.718585 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:28.718606 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:28.718626 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:28.718646 | orchestrator |
2025-09-27 21:41:28.718666 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-27 21:41:28.718684 | orchestrator |
2025-09-27 21:41:28.718702 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-27 21:41:28.718721 | orchestrator | Saturday 27 September 2025 21:41:25 +0000 (0:00:01.305) 0:00:36.066 ****
2025-09-27 21:41:28.718740 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:41:28.718757 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:41:28.718775 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:41:28.718794 | orchestrator | ok: [testbed-manager]
2025-09-27 21:41:28.718811 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:28.718830 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:28.718848 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:28.718865 | orchestrator |
2025-09-27 21:41:28.718880 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:41:28.718899 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:41:28.718919 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:41:28.718939 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:41:28.718956 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:41:28.719077 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:41:28.719102 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:41:28.719143 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:41:28.719160 | orchestrator |
2025-09-27 21:41:28.719178 | orchestrator |
2025-09-27 21:41:28.719212 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:41:28.719231 | orchestrator | Saturday 27 September 2025 21:41:28 +0000 (0:00:03.674) 0:00:39.741 ****
2025-09-27 21:41:28.719248 | orchestrator | ===============================================================================
2025-09-27 21:41:28.719263 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.11s
2025-09-27 21:41:28.719282 | orchestrator | Install required packages (Debian) -------------------------------------- 7.51s
2025-09-27 21:41:28.719300 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.67s
2025-09-27 21:41:28.719318 | orchestrator | Copy fact files --------------------------------------------------------- 3.44s
2025-09-27 21:41:28.719337 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s
2025-09-27 21:41:28.719356 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.31s
2025-09-27 21:41:28.719391 | orchestrator | Copy fact file ---------------------------------------------------------- 1.13s
2025-09-27 21:41:28.847271 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2025-09-27 21:41:28.847358 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s
2025-09-27 21:41:28.847367 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-09-27 21:41:28.847375 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2025-09-27 21:41:28.847383 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-09-27 21:41:28.847390 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2025-09-27 21:41:28.847397 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2025-09-27 21:41:28.847405 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2025-09-27 21:41:28.847413 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2025-09-27 21:41:28.847420 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2025-09-27 21:41:28.847427 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2025-09-27 21:41:29.044641 | orchestrator | + osism apply bootstrap
2025-09-27 21:41:40.806627 | orchestrator | 2025-09-27 21:41:40 | INFO  | Task 64ffa6a6-38a5-4de9-a59b-c0f4e32bdf8b (bootstrap) was prepared for execution.
2025-09-27 21:41:40.806755 | orchestrator | 2025-09-27 21:41:40 | INFO  | It takes a moment until task 64ffa6a6-38a5-4de9-a59b-c0f4e32bdf8b (bootstrap) has been started and output is visible here.
2025-09-27 21:41:56.247137 | orchestrator |
2025-09-27 21:41:56.247269 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-27 21:41:56.247287 | orchestrator |
2025-09-27 21:41:56.247299 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-27 21:41:56.247310 | orchestrator | Saturday 27 September 2025 21:41:44 +0000 (0:00:00.158) 0:00:00.158 ****
2025-09-27 21:41:56.247322 | orchestrator | ok: [testbed-manager]
2025-09-27 21:41:56.247334 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:56.247345 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:56.247356 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:56.247367 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:41:56.247378 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:41:56.247389 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:41:56.247400 | orchestrator |
2025-09-27 21:41:56.247411 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-27 21:41:56.247422 | orchestrator |
2025-09-27 21:41:56.247433 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-27 21:41:56.247444 | orchestrator | Saturday 27 September 2025 21:41:44 +0000 (0:00:00.219) 0:00:00.378 ****
2025-09-27 21:41:56.247455 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:41:56.247465 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:41:56.247476 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:41:56.247515 | orchestrator | ok: [testbed-manager]
2025-09-27 21:41:56.247526 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:41:56.247536 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:41:56.247547 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:41:56.247557 | orchestrator |
2025-09-27 21:41:56.247568 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-27 21:41:56.247578 | orchestrator |
2025-09-27 21:41:56.247589 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-27 21:41:56.247600 | orchestrator | Saturday 27 September 2025 21:41:48 +0000 (0:00:03.639) 0:00:04.018 ****
2025-09-27 21:41:56.247611 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-27 21:41:56.247622 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-27 21:41:56.247632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-27 21:41:56.247643 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-27 21:41:56.247653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 21:41:56.247664 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-27 21:41:56.247675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-27 21:41:56.247700 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-27 21:41:56.247711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-27 21:41:56.247721 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-27 21:41:56.247739 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-27 21:41:56.247758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-27 21:41:56.247775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-27 21:41:56.247793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-27 21:41:56.247811 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-27 21:41:56.247829 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-27 21:41:56.247841 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-27 21:41:56.247851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-27 21:41:56.247862 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-27 21:41:56.247872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-27 21:41:56.247883 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-27 21:41:56.247894 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:41:56.247904 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-27 21:41:56.247914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-27 21:41:56.247925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-27 21:41:56.247936 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:41:56.247946 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-27 21:41:56.247957 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-27 21:41:56.247967 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-27 21:41:56.247978 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-27 21:41:56.247988 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-27 21:41:56.247998 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:41:56.248080 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-27 21:41:56.248093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-27 21:41:56.248104 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-27 21:41:56.248116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-27 21:41:56.248127 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-27 21:41:56.248139 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-27 21:41:56.248161 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 21:41:56.248173 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-27 21:41:56.248184 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-27 21:41:56.248196 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-27 21:41:56.248208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-27 21:41:56.248219 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-27 21:41:56.248231 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-27 21:41:56.248243 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-27 21:41:56.248276 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:41:56.248289 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-27 21:41:56.248301 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:41:56.248312 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-27 21:41:56.248324 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-27 21:41:56.248336 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-27 21:41:56.248347 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-27 21:41:56.248359 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:41:56.248371 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-27 21:41:56.248382 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:41:56.248394 | orchestrator |
2025-09-27 21:41:56.248406 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-27 21:41:56.248417 | orchestrator |
2025-09-27 21:41:56.248429 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-27 21:41:56.248441 | orchestrator | Saturday 27 September 2025 21:41:49 +0000
(0:00:00.419) 0:00:04.438 **** 2025-09-27 21:41:56.248452 | orchestrator | ok: [testbed-manager] 2025-09-27 21:41:56.248464 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:56.248476 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:56.248488 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:56.248500 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:56.248511 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:56.248523 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:56.248534 | orchestrator | 2025-09-27 21:41:56.248546 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-27 21:41:56.248558 | orchestrator | Saturday 27 September 2025 21:41:50 +0000 (0:00:01.257) 0:00:05.695 **** 2025-09-27 21:41:56.248570 | orchestrator | ok: [testbed-manager] 2025-09-27 21:41:56.248581 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:56.248592 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:56.248604 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:56.248615 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:56.248627 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:56.248638 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:56.248650 | orchestrator | 2025-09-27 21:41:56.248661 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-27 21:41:56.248673 | orchestrator | Saturday 27 September 2025 21:41:51 +0000 (0:00:01.159) 0:00:06.854 **** 2025-09-27 21:41:56.248687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:56.248702 | orchestrator | 2025-09-27 21:41:56.248713 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-27 21:41:56.248725 | orchestrator | 
Saturday 27 September 2025 21:41:51 +0000 (0:00:00.266) 0:00:07.120 **** 2025-09-27 21:41:56.248736 | orchestrator | changed: [testbed-manager] 2025-09-27 21:41:56.248748 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:56.248760 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:56.248772 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:56.248790 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:56.248802 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:56.248813 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:56.248825 | orchestrator | 2025-09-27 21:41:56.248837 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-27 21:41:56.248849 | orchestrator | Saturday 27 September 2025 21:41:53 +0000 (0:00:02.010) 0:00:09.131 **** 2025-09-27 21:41:56.248861 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:41:56.248874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:56.248886 | orchestrator | 2025-09-27 21:41:56.248898 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-27 21:41:56.248910 | orchestrator | Saturday 27 September 2025 21:41:54 +0000 (0:00:00.260) 0:00:09.391 **** 2025-09-27 21:41:56.248922 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:56.248934 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:56.248946 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:56.248957 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:56.248969 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:56.248980 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:56.248992 | orchestrator | 2025-09-27 21:41:56.249004 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2025-09-27 21:41:56.249032 | orchestrator | Saturday 27 September 2025 21:41:55 +0000 (0:00:01.125) 0:00:10.517 **** 2025-09-27 21:41:56.249043 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:41:56.249054 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:56.249065 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:56.249075 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:56.249086 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:56.249097 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:56.249107 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:56.249117 | orchestrator | 2025-09-27 21:41:56.249128 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-27 21:41:56.249139 | orchestrator | Saturday 27 September 2025 21:41:55 +0000 (0:00:00.584) 0:00:11.102 **** 2025-09-27 21:41:56.249150 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:56.249160 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:56.249171 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:56.249182 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:56.249192 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:56.249203 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:56.249213 | orchestrator | ok: [testbed-manager] 2025-09-27 21:41:56.249224 | orchestrator | 2025-09-27 21:41:56.249244 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-27 21:41:56.249258 | orchestrator | Saturday 27 September 2025 21:41:56 +0000 (0:00:00.396) 0:00:11.498 **** 2025-09-27 21:41:56.249269 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:41:56.249280 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:56.249298 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:42:08.244220 | orchestrator | skipping: 
[testbed-node-5] 2025-09-27 21:42:08.244337 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:42:08.244354 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:42:08.244365 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:42:08.244377 | orchestrator | 2025-09-27 21:42:08.244390 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-27 21:42:08.244406 | orchestrator | Saturday 27 September 2025 21:41:56 +0000 (0:00:00.200) 0:00:11.699 **** 2025-09-27 21:42:08.244425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:42:08.244497 | orchestrator | 2025-09-27 21:42:08.244518 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-27 21:42:08.244538 | orchestrator | Saturday 27 September 2025 21:41:56 +0000 (0:00:00.272) 0:00:11.972 **** 2025-09-27 21:42:08.244556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:42:08.244576 | orchestrator | 2025-09-27 21:42:08.244594 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-27 21:42:08.244613 | orchestrator | Saturday 27 September 2025 21:41:56 +0000 (0:00:00.285) 0:00:12.257 **** 2025-09-27 21:42:08.244631 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.244656 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.244679 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.244697 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:08.244716 | orchestrator | ok: [testbed-node-0] 2025-09-27 
21:42:08.244736 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:08.244755 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.244775 | orchestrator | 2025-09-27 21:42:08.244794 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-27 21:42:08.244815 | orchestrator | Saturday 27 September 2025 21:41:58 +0000 (0:00:01.300) 0:00:13.558 **** 2025-09-27 21:42:08.244835 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:42:08.244853 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:42:08.244885 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:42:08.244903 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:42:08.244923 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:42:08.244943 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:42:08.244963 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:42:08.244981 | orchestrator | 2025-09-27 21:42:08.245023 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-27 21:42:08.245041 | orchestrator | Saturday 27 September 2025 21:41:58 +0000 (0:00:00.228) 0:00:13.786 **** 2025-09-27 21:42:08.245053 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.245065 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.245078 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.245090 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.245100 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:08.245111 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:08.245121 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:08.245132 | orchestrator | 2025-09-27 21:42:08.245142 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-27 21:42:08.245153 | orchestrator | Saturday 27 September 2025 21:41:58 +0000 (0:00:00.538) 0:00:14.325 **** 2025-09-27 21:42:08.245164 | orchestrator | skipping: 
[testbed-manager] 2025-09-27 21:42:08.245175 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:42:08.245186 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:42:08.245197 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:42:08.245208 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:42:08.245218 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:42:08.245229 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:42:08.245239 | orchestrator | 2025-09-27 21:42:08.245250 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-27 21:42:08.245262 | orchestrator | Saturday 27 September 2025 21:41:59 +0000 (0:00:00.255) 0:00:14.581 **** 2025-09-27 21:42:08.245273 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.245283 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:42:08.245294 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:42:08.245305 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:42:08.245315 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:42:08.245326 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:42:08.245336 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:42:08.245359 | orchestrator | 2025-09-27 21:42:08.245370 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-27 21:42:08.245381 | orchestrator | Saturday 27 September 2025 21:41:59 +0000 (0:00:00.525) 0:00:15.106 **** 2025-09-27 21:42:08.245391 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.245402 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:42:08.245413 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:42:08.245423 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:42:08.245434 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:42:08.245444 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:42:08.245455 | orchestrator | changed: 
[testbed-node-2] 2025-09-27 21:42:08.245466 | orchestrator | 2025-09-27 21:42:08.245476 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-27 21:42:08.245487 | orchestrator | Saturday 27 September 2025 21:42:00 +0000 (0:00:01.069) 0:00:16.176 **** 2025-09-27 21:42:08.245497 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.245508 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:08.245519 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.245529 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.245540 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.245551 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:08.245567 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:08.245585 | orchestrator | 2025-09-27 21:42:08.245603 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-27 21:42:08.245628 | orchestrator | Saturday 27 September 2025 21:42:02 +0000 (0:00:01.326) 0:00:17.502 **** 2025-09-27 21:42:08.245678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:42:08.245698 | orchestrator | 2025-09-27 21:42:08.245715 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-27 21:42:08.245727 | orchestrator | Saturday 27 September 2025 21:42:02 +0000 (0:00:00.345) 0:00:17.847 **** 2025-09-27 21:42:08.245746 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:42:08.245764 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:42:08.245782 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:42:08.245809 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:42:08.245829 | orchestrator | changed: [testbed-node-1] 2025-09-27 
21:42:08.245846 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:42:08.245861 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:42:08.245880 | orchestrator | 2025-09-27 21:42:08.245898 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-27 21:42:08.245917 | orchestrator | Saturday 27 September 2025 21:42:03 +0000 (0:00:01.240) 0:00:19.088 **** 2025-09-27 21:42:08.245936 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.245954 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.245971 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.245982 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.245993 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:08.246118 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:08.246131 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:08.246141 | orchestrator | 2025-09-27 21:42:08.246153 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-27 21:42:08.246164 | orchestrator | Saturday 27 September 2025 21:42:03 +0000 (0:00:00.206) 0:00:19.294 **** 2025-09-27 21:42:08.246175 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.246186 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.246196 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.246207 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.246217 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:08.246228 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:08.246238 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:08.246249 | orchestrator | 2025-09-27 21:42:08.246260 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-27 21:42:08.246282 | orchestrator | Saturday 27 September 2025 21:42:04 +0000 (0:00:00.238) 0:00:19.532 **** 2025-09-27 21:42:08.246293 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.246304 | 
orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.246322 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.246333 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.246344 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:08.246355 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:08.246365 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:08.246376 | orchestrator | 2025-09-27 21:42:08.246387 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-27 21:42:08.246398 | orchestrator | Saturday 27 September 2025 21:42:04 +0000 (0:00:00.238) 0:00:19.771 **** 2025-09-27 21:42:08.246410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:42:08.246423 | orchestrator | 2025-09-27 21:42:08.246434 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-27 21:42:08.246445 | orchestrator | Saturday 27 September 2025 21:42:04 +0000 (0:00:00.272) 0:00:20.044 **** 2025-09-27 21:42:08.246456 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.246467 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.246478 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.246488 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.246499 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:08.246509 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:08.246520 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:08.246531 | orchestrator | 2025-09-27 21:42:08.246541 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-27 21:42:08.246552 | orchestrator | Saturday 27 September 2025 21:42:05 +0000 (0:00:00.539) 0:00:20.584 **** 2025-09-27 21:42:08.246563 | orchestrator | 
skipping: [testbed-manager] 2025-09-27 21:42:08.246574 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:42:08.246585 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:42:08.246595 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:42:08.246606 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:42:08.246617 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:42:08.246627 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:42:08.246638 | orchestrator | 2025-09-27 21:42:08.246649 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-27 21:42:08.246660 | orchestrator | Saturday 27 September 2025 21:42:05 +0000 (0:00:00.235) 0:00:20.820 **** 2025-09-27 21:42:08.246670 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.246681 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.246697 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.246738 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.246761 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:42:08.246786 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:42:08.246803 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:42:08.246819 | orchestrator | 2025-09-27 21:42:08.246837 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-27 21:42:08.246855 | orchestrator | Saturday 27 September 2025 21:42:06 +0000 (0:00:01.061) 0:00:21.881 **** 2025-09-27 21:42:08.246873 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.246892 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.246912 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.246930 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.246948 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:08.246967 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:08.246985 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:08.247056 | orchestrator | 
2025-09-27 21:42:08.247070 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-27 21:42:08.247080 | orchestrator | Saturday 27 September 2025 21:42:07 +0000 (0:00:00.591) 0:00:22.473 **** 2025-09-27 21:42:08.247102 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:08.247113 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:08.247123 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:08.247134 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:08.247159 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:42:46.651044 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:42:46.651167 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:42:46.651184 | orchestrator | 2025-09-27 21:42:46.651197 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-27 21:42:46.651210 | orchestrator | Saturday 27 September 2025 21:42:08 +0000 (0:00:01.140) 0:00:23.614 **** 2025-09-27 21:42:46.651222 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:46.651234 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:46.651245 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:46.651256 | orchestrator | changed: [testbed-manager] 2025-09-27 21:42:46.651267 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:42:46.651278 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:42:46.651289 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:42:46.651300 | orchestrator | 2025-09-27 21:42:46.651311 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-27 21:42:46.651322 | orchestrator | Saturday 27 September 2025 21:42:24 +0000 (0:00:16.378) 0:00:39.992 **** 2025-09-27 21:42:46.651332 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:46.651343 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:46.651354 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:46.651364 | orchestrator 
| ok: [testbed-node-5] 2025-09-27 21:42:46.651375 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:46.651386 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:46.651396 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:46.651407 | orchestrator | 2025-09-27 21:42:46.651418 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-27 21:42:46.651429 | orchestrator | Saturday 27 September 2025 21:42:24 +0000 (0:00:00.223) 0:00:40.216 **** 2025-09-27 21:42:46.651440 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:46.651451 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:46.651462 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:46.651472 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:46.651483 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:46.651493 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:46.651505 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:46.651517 | orchestrator | 2025-09-27 21:42:46.651529 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-27 21:42:46.651541 | orchestrator | Saturday 27 September 2025 21:42:25 +0000 (0:00:00.209) 0:00:40.425 **** 2025-09-27 21:42:46.651553 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:46.651565 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:46.651578 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:46.651590 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:46.651602 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:46.651614 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:46.651626 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:46.651638 | orchestrator | 2025-09-27 21:42:46.651650 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-27 21:42:46.651662 | orchestrator | Saturday 27 September 2025 21:42:25 +0000 (0:00:00.212) 0:00:40.637 **** 2025-09-27 
21:42:46.651675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:42:46.651690 | orchestrator | 2025-09-27 21:42:46.651702 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-27 21:42:46.651714 | orchestrator | Saturday 27 September 2025 21:42:25 +0000 (0:00:00.301) 0:00:40.939 **** 2025-09-27 21:42:46.651726 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:46.651738 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:46.651776 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:46.651789 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:46.651801 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:46.651813 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:46.651824 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:46.651836 | orchestrator | 2025-09-27 21:42:46.651848 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-27 21:42:46.651860 | orchestrator | Saturday 27 September 2025 21:42:26 +0000 (0:00:01.208) 0:00:42.147 **** 2025-09-27 21:42:46.651870 | orchestrator | changed: [testbed-manager] 2025-09-27 21:42:46.651881 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:42:46.651891 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:42:46.651902 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:42:46.651913 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:42:46.651924 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:42:46.651934 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:42:46.651945 | orchestrator | 2025-09-27 21:42:46.651955 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-27 21:42:46.651966 | 
orchestrator | Saturday 27 September 2025 21:42:27 +0000 (0:00:00.913) 0:00:43.060 **** 2025-09-27 21:42:46.651998 | orchestrator | ok: [testbed-manager] 2025-09-27 21:42:46.652009 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:42:46.652020 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:42:46.652030 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:42:46.652041 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:42:46.652051 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:42:46.652080 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:42:46.652091 | orchestrator | 2025-09-27 21:42:46.652102 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-27 21:42:46.652113 | orchestrator | Saturday 27 September 2025 21:42:28 +0000 (0:00:00.742) 0:00:43.803 **** 2025-09-27 21:42:46.652125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:42:46.652138 | orchestrator | 2025-09-27 21:42:46.652149 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-27 21:42:46.652160 | orchestrator | Saturday 27 September 2025 21:42:28 +0000 (0:00:00.262) 0:00:44.065 **** 2025-09-27 21:42:46.652171 | orchestrator | changed: [testbed-manager] 2025-09-27 21:42:46.652182 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:42:46.652192 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:42:46.652203 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:42:46.652214 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:42:46.652224 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:42:46.652235 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:42:46.652246 | orchestrator | 2025-09-27 21:42:46.652274 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************
Saturday 27 September 2025 21:42:29 +0000 (0:00:01.020) 0:00:45.086 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Saturday 27 September 2025 21:42:29 +0000 (0:00:00.257) 0:00:45.343 ****
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Saturday 27 September 2025 21:42:41 +0000 (0:00:11.420) 0:00:56.764 ****
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-2]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Saturday 27 September 2025 21:42:42 +0000 (0:00:01.262) 0:00:58.026 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Saturday 27 September 2025 21:42:43 +0000 (0:00:00.197) 0:00:58.892 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Saturday 27 September 2025 21:42:43 +0000 (0:00:00.209) 0:00:59.089 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Saturday 27 September 2025 21:42:43 +0000 (0:00:00.247) 0:00:59.299 ****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.packages : Install needrestart package] ********************
Saturday 27 September 2025 21:42:44 +0000 (0:00:00.247) 0:00:59.546 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Saturday 27 September 2025 21:42:45 +0000 (0:00:01.704) 0:01:01.251 ****
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-4]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Saturday 27 September 2025 21:42:46 +0000 (0:00:00.521) 0:01:01.772 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Update package cache] ***************************
Saturday 27 September 2025 21:42:46 +0000 (0:00:00.249) 0:01:02.021 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [osism.commons.packages : Download upgrade packages] **********************
Saturday 27 September 2025 21:42:47 +0000 (0:00:01.275) 0:01:03.297 ****
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [osism.commons.packages : Upgrade packages] *******************************
Saturday 27 September 2025 21:42:49 +0000 (0:00:01.741) 0:01:05.038 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [osism.commons.packages : Download required packages] *********************
Saturday 27 September 2025 21:42:52 +0000 (0:00:02.485) 0:01:07.524 ****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]

TASK [osism.commons.packages : Install required packages] **********************
Saturday 27 September 2025 21:43:31 +0000 (0:00:38.931) 0:01:46.455 ****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-2]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Saturday 27 September 2025 21:44:47 +0000 (0:01:16.188) 0:03:02.644 ****
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Saturday 27 September 2025 21:44:48 +0000 (0:00:01.387) 0:03:04.031 ****
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-4]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Saturday 27 September 2025 21:45:00 +0000 (0:00:11.544) 0:03:15.575 ****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Saturday 27 September 2025 21:45:00 +0000 (0:00:00.348) 0:03:15.924 ****
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Saturday 27 September 2025 21:45:01 +0000 (0:00:00.743) 0:03:16.668 ****
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Saturday 27 September 2025 21:45:05 +0000 (0:00:04.650) 0:03:21.318 ****
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})

TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
Saturday 27 September 2025 21:45:07 +0000 (0:00:01.626) 0:03:22.944 ****
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
Saturday 27 September 2025 21:45:08 +0000 (0:00:00.524) 0:03:23.468 ****
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
Saturday 27 September 2025 21:45:08 +0000 (0:00:00.584) 0:03:24.053 ****
skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})

TASK [osism.commons.limits : Include limits tasks] *****************************
Saturday 27 September 2025 21:45:09 +0000 (0:00:00.615) 0:03:24.669 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.services : Populate service facts] *************************
Saturday 27 September 2025 21:45:09 +0000 (0:00:00.343) 0:03:25.013 ****
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]

TASK [osism.commons.services : Check services] *********************************
Saturday 27 September 2025 21:45:15 +0000 (0:00:05.712) 0:03:30.725 ****
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-5] => (item=nscd)
skipping: [testbed-node-4]
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nscd)
skipping: [testbed-node-2]

TASK [osism.commons.services : Start/enable required services] *****************
Saturday 27 September 2025 21:45:15 +0000 (0:00:00.279) 0:03:31.004 ****
ok: [testbed-manager] => (item=cron)
ok: [testbed-node-4] => (item=cron)
ok: [testbed-node-3] => (item=cron)
ok: [testbed-node-0] => (item=cron)
ok: [testbed-node-5] => (item=cron)
ok: [testbed-node-1] => (item=cron)
ok: [testbed-node-2] => (item=cron)

TASK [osism.commons.motd : Include distribution specific configure tasks] ******
Saturday 27 September 2025 21:45:16 +0000 (0:00:01.009) 0:03:32.014 ****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.motd : Remove update-motd package] *************************
Saturday 27 September 2025 21:45:17 +0000 (0:00:00.539) 0:03:32.554 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]

TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
Saturday 27 September 2025 21:45:18 +0000 (0:00:01.208) 0:03:33.762 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
Saturday 27 September 2025 21:45:18 +0000 (0:00:00.583) 0:03:34.345 ****
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
Saturday 27 September 2025 21:45:19 +0000 (0:00:00.667) 0:03:35.013 ****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-27 21:45:21.181901 | orchestrator | Saturday 27 September 2025 21:45:20 +0000 (0:00:00.548) 0:03:35.561 **** 2025-09-27 21:45:21.181917 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759008023.3856614, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:21.181957 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759008015.1283433, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:21.181977 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759007975.2325642, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 
21:45:21.181989 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759008021.2208316, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:21.182001 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759008013.9965265, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:21.182187 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759008018.7323985, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:36.342157 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759008009.509387, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:36.342286 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:36.342327 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:36.342339 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:36.342350 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:36.342360 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:36.342370 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:36.342398 | orchestrator | changed: [testbed-node-2] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:45:36.342409 | orchestrator | 2025-09-27 21:45:36.342422 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-27 21:45:36.342433 | orchestrator | Saturday 27 September 2025 21:45:21 +0000 (0:00:00.986) 0:03:36.547 **** 2025-09-27 21:45:36.342452 | orchestrator | changed: [testbed-manager] 2025-09-27 21:45:36.342463 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:45:36.342472 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:45:36.342482 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:45:36.342491 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:45:36.342501 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:45:36.342510 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:45:36.342520 | orchestrator | 2025-09-27 21:45:36.342529 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-27 21:45:36.342539 | orchestrator | Saturday 27 September 2025 21:45:22 +0000 (0:00:01.072) 0:03:37.620 **** 2025-09-27 21:45:36.342548 | orchestrator | changed: [testbed-manager] 2025-09-27 21:45:36.342558 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:45:36.342576 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:45:36.342592 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:45:36.342609 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:45:36.342625 | orchestrator | changed: 
[testbed-node-1] 2025-09-27 21:45:36.342641 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:45:36.342656 | orchestrator | 2025-09-27 21:45:36.342672 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-27 21:45:36.342688 | orchestrator | Saturday 27 September 2025 21:45:23 +0000 (0:00:01.144) 0:03:38.764 **** 2025-09-27 21:45:36.342704 | orchestrator | changed: [testbed-manager] 2025-09-27 21:45:36.342719 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:45:36.342736 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:45:36.342753 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:45:36.342771 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:45:36.342788 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:45:36.342799 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:45:36.342810 | orchestrator | 2025-09-27 21:45:36.342821 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-27 21:45:36.342832 | orchestrator | Saturday 27 September 2025 21:45:24 +0000 (0:00:01.165) 0:03:39.930 **** 2025-09-27 21:45:36.342843 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:45:36.342854 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:45:36.342864 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:45:36.342875 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:45:36.342886 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:45:36.342897 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:45:36.342908 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:45:36.342948 | orchestrator | 2025-09-27 21:45:36.342960 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-27 21:45:36.342971 | orchestrator | Saturday 27 September 2025 21:45:24 +0000 (0:00:00.276) 0:03:40.207 **** 2025-09-27 21:45:36.342982 | orchestrator | ok: [testbed-manager] 
2025-09-27 21:45:36.342993 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:45:36.343002 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:45:36.343011 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:45:36.343021 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:45:36.343030 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:45:36.343039 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:45:36.343048 | orchestrator | 2025-09-27 21:45:36.343058 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-27 21:45:36.343067 | orchestrator | Saturday 27 September 2025 21:45:25 +0000 (0:00:00.726) 0:03:40.933 **** 2025-09-27 21:45:36.343078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:45:36.343090 | orchestrator | 2025-09-27 21:45:36.343099 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-27 21:45:36.343109 | orchestrator | Saturday 27 September 2025 21:45:25 +0000 (0:00:00.379) 0:03:41.313 **** 2025-09-27 21:45:36.343127 | orchestrator | ok: [testbed-manager] 2025-09-27 21:45:36.343137 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:45:36.343146 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:45:36.343155 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:45:36.343165 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:45:36.343174 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:45:36.343184 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:45:36.343193 | orchestrator | 2025-09-27 21:45:36.343202 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-27 21:45:36.343211 | orchestrator | Saturday 27 September 2025 21:45:34 +0000 (0:00:08.201) 
0:03:49.514 **** 2025-09-27 21:45:36.343221 | orchestrator | ok: [testbed-manager] 2025-09-27 21:45:36.343230 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:45:36.343240 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:45:36.343249 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:45:36.343258 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:45:36.343268 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:45:36.343277 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:45:36.343286 | orchestrator | 2025-09-27 21:45:36.343296 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-27 21:45:36.343305 | orchestrator | Saturday 27 September 2025 21:45:35 +0000 (0:00:01.230) 0:03:50.745 **** 2025-09-27 21:45:36.343315 | orchestrator | ok: [testbed-manager] 2025-09-27 21:45:36.343324 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:45:36.343333 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:45:36.343342 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:45:36.343352 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:45:36.343361 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:45:36.343370 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:45:36.343379 | orchestrator | 2025-09-27 21:45:36.343398 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-27 21:46:41.205584 | orchestrator | Saturday 27 September 2025 21:45:36 +0000 (0:00:00.962) 0:03:51.708 **** 2025-09-27 21:46:41.205656 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:41.205665 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:41.205672 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:41.205678 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:46:41.205684 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:41.205690 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:41.205696 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:41.205702 | orchestrator 
| 2025-09-27 21:46:41.205709 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-27 21:46:41.205717 | orchestrator | Saturday 27 September 2025 21:45:36 +0000 (0:00:00.402) 0:03:52.110 **** 2025-09-27 21:46:41.205723 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:41.205729 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:41.205735 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:41.205741 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:46:41.205747 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:41.205753 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:41.205759 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:41.205765 | orchestrator | 2025-09-27 21:46:41.205771 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-27 21:46:41.205778 | orchestrator | Saturday 27 September 2025 21:45:37 +0000 (0:00:00.300) 0:03:52.410 **** 2025-09-27 21:46:41.205784 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:41.205790 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:41.205796 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:41.205802 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:46:41.205818 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:41.205824 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:41.205830 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:41.205837 | orchestrator | 2025-09-27 21:46:41.205843 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-27 21:46:41.205849 | orchestrator | Saturday 27 September 2025 21:45:37 +0000 (0:00:00.314) 0:03:52.724 **** 2025-09-27 21:46:41.205903 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:41.205910 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:41.205917 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:46:41.205923 | orchestrator | ok: 
[testbed-node-1] 2025-09-27 21:46:41.205929 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:41.205935 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:41.205941 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:41.205947 | orchestrator | 2025-09-27 21:46:41.205953 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-27 21:46:41.205959 | orchestrator | Saturday 27 September 2025 21:45:43 +0000 (0:00:05.702) 0:03:58.427 **** 2025-09-27 21:46:41.205966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:46:41.205974 | orchestrator | 2025-09-27 21:46:41.205980 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-27 21:46:41.205986 | orchestrator | Saturday 27 September 2025 21:45:43 +0000 (0:00:00.399) 0:03:58.826 **** 2025-09-27 21:46:41.205993 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-27 21:46:41.205999 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-27 21:46:41.206005 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-27 21:46:41.206012 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-27 21:46:41.206055 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:41.206062 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-27 21:46:41.206068 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-27 21:46:41.206074 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:46:41.206080 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-27 21:46:41.206086 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-27 
21:46:41.206092 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:46:41.206098 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-27 21:46:41.206104 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-27 21:46:41.206110 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:46:41.206116 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-27 21:46:41.206122 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:41.206129 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-27 21:46:41.206135 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:41.206141 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-27 21:46:41.206148 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-27 21:46:41.206155 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:41.206162 | orchestrator | 2025-09-27 21:46:41.206169 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-27 21:46:41.206176 | orchestrator | Saturday 27 September 2025 21:45:43 +0000 (0:00:00.323) 0:03:59.149 **** 2025-09-27 21:46:41.206183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:46:41.206190 | orchestrator | 2025-09-27 21:46:41.206197 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-27 21:46:41.206204 | orchestrator | Saturday 27 September 2025 21:45:44 +0000 (0:00:00.371) 0:03:59.521 **** 2025-09-27 21:46:41.206211 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-27 21:46:41.206219 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:41.206226 | orchestrator | 
skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-27 21:46:41.206233 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-27 21:46:41.206245 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:46:41.206263 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-27 21:46:41.206270 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:46:41.206277 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-27 21:46:41.206285 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:46:41.206292 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:41.206299 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-27 21:46:41.206306 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:41.206313 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-27 21:46:41.206320 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:41.206326 | orchestrator | 2025-09-27 21:46:41.206334 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-27 21:46:41.206341 | orchestrator | Saturday 27 September 2025 21:45:44 +0000 (0:00:00.304) 0:03:59.826 **** 2025-09-27 21:46:41.206348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:46:41.206355 | orchestrator | 2025-09-27 21:46:41.206362 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-27 21:46:41.206369 | orchestrator | Saturday 27 September 2025 21:45:44 +0000 (0:00:00.370) 0:04:00.197 **** 2025-09-27 21:46:41.206376 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:41.206383 | orchestrator | changed: [testbed-node-1] 
2025-09-27 21:46:41.206390 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:46:41.206397 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:41.206404 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:46:41.206411 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:41.206418 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:46:41.206424 | orchestrator | 2025-09-27 21:46:41.206431 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-27 21:46:41.206438 | orchestrator | Saturday 27 September 2025 21:46:16 +0000 (0:00:31.886) 0:04:32.083 **** 2025-09-27 21:46:41.206445 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:41.206452 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:46:41.206459 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:46:41.206467 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:41.206474 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:41.206481 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:46:41.206488 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:41.206495 | orchestrator | 2025-09-27 21:46:41.206502 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-27 21:46:41.206508 | orchestrator | Saturday 27 September 2025 21:46:24 +0000 (0:00:07.829) 0:04:39.913 **** 2025-09-27 21:46:41.206514 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:46:41.206520 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:46:41.206526 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:41.206533 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:41.206539 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:41.206545 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:41.206551 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:46:41.206557 | orchestrator | 2025-09-27 21:46:41.206563 | orchestrator | TASK 
[osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-27 21:46:41.206569 | orchestrator | Saturday 27 September 2025 21:46:31 +0000 (0:00:07.344) 0:04:47.257 **** 2025-09-27 21:46:41.206575 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:41.206582 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:46:41.206588 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:41.206594 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:41.206600 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:41.206606 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:41.206616 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:41.206622 | orchestrator | 2025-09-27 21:46:41.206628 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-27 21:46:41.206639 | orchestrator | Saturday 27 September 2025 21:46:33 +0000 (0:00:01.370) 0:04:48.628 **** 2025-09-27 21:46:41.206646 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:46:41.206652 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:46:41.206658 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:41.206664 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:41.206670 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:41.206676 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:46:41.206682 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:41.206688 | orchestrator | 2025-09-27 21:46:41.206694 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-27 21:46:41.206701 | orchestrator | Saturday 27 September 2025 21:46:38 +0000 (0:00:05.340) 0:04:53.968 **** 2025-09-27 21:46:41.206707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 
21:46:41.206715 | orchestrator | 2025-09-27 21:46:41.206721 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-27 21:46:41.206727 | orchestrator | Saturday 27 September 2025 21:46:39 +0000 (0:00:00.517) 0:04:54.486 **** 2025-09-27 21:46:41.206733 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:41.206739 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:46:41.206745 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:46:41.206751 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:46:41.206758 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:41.206764 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:41.206770 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:41.206776 | orchestrator | 2025-09-27 21:46:41.206782 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-27 21:46:41.206788 | orchestrator | Saturday 27 September 2025 21:46:39 +0000 (0:00:00.708) 0:04:55.194 **** 2025-09-27 21:46:41.206794 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:41.206800 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:46:41.206807 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:41.206813 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:41.206822 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:54.545587 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:54.545732 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:54.545795 | orchestrator | 2025-09-27 21:46:54.545819 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-27 21:46:54.545841 | orchestrator | Saturday 27 September 2025 21:46:41 +0000 (0:00:01.377) 0:04:56.572 **** 2025-09-27 21:46:54.545941 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:54.545964 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:46:54.545985 | orchestrator | changed: 
[testbed-node-5] 2025-09-27 21:46:54.546006 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:46:54.546157 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:54.546183 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:54.546204 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:54.546223 | orchestrator | 2025-09-27 21:46:54.546244 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-27 21:46:54.546265 | orchestrator | Saturday 27 September 2025 21:46:41 +0000 (0:00:00.699) 0:04:57.272 **** 2025-09-27 21:46:54.546285 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:54.546306 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:46:54.546325 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:46:54.546345 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:46:54.546364 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:54.546384 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:54.546404 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:54.546458 | orchestrator | 2025-09-27 21:46:54.546480 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-27 21:46:54.546516 | orchestrator | Saturday 27 September 2025 21:46:42 +0000 (0:00:00.303) 0:04:57.576 **** 2025-09-27 21:46:54.546537 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:54.546555 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:46:54.546574 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:46:54.546592 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:46:54.546611 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:54.546631 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:54.546652 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:54.546672 | orchestrator | 2025-09-27 21:46:54.546693 | orchestrator | TASK [osism.services.docker : Gather 
variables for each operating system] ****** 2025-09-27 21:46:54.546715 | orchestrator | Saturday 27 September 2025 21:46:42 +0000 (0:00:00.375) 0:04:57.952 **** 2025-09-27 21:46:54.546735 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:54.546756 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:54.546775 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:54.546794 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:46:54.546813 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:54.546833 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:54.546852 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:54.546894 | orchestrator | 2025-09-27 21:46:54.546914 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-27 21:46:54.546934 | orchestrator | Saturday 27 September 2025 21:46:42 +0000 (0:00:00.297) 0:04:58.250 **** 2025-09-27 21:46:54.546953 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:54.546972 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:46:54.546992 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:46:54.547011 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:46:54.547030 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:54.547050 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:54.547070 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:54.547089 | orchestrator | 2025-09-27 21:46:54.547109 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-27 21:46:54.547131 | orchestrator | Saturday 27 September 2025 21:46:43 +0000 (0:00:00.258) 0:04:58.508 **** 2025-09-27 21:46:54.547150 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:54.547170 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:54.547188 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:54.547208 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:46:54.547226 | orchestrator | ok: 
[testbed-node-0] 2025-09-27 21:46:54.547244 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:54.547263 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:54.547282 | orchestrator | 2025-09-27 21:46:54.547301 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-27 21:46:54.547320 | orchestrator | Saturday 27 September 2025 21:46:43 +0000 (0:00:00.303) 0:04:58.811 **** 2025-09-27 21:46:54.547339 | orchestrator | ok: [testbed-manager] =>  2025-09-27 21:46:54.547358 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:46:54.547376 | orchestrator | ok: [testbed-node-3] =>  2025-09-27 21:46:54.547395 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:46:54.547414 | orchestrator | ok: [testbed-node-4] =>  2025-09-27 21:46:54.547433 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:46:54.547451 | orchestrator | ok: [testbed-node-5] =>  2025-09-27 21:46:54.547469 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:46:54.547486 | orchestrator | ok: [testbed-node-0] =>  2025-09-27 21:46:54.547504 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:46:54.547523 | orchestrator | ok: [testbed-node-1] =>  2025-09-27 21:46:54.547542 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:46:54.547560 | orchestrator | ok: [testbed-node-2] =>  2025-09-27 21:46:54.547579 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:46:54.547598 | orchestrator | 2025-09-27 21:46:54.547617 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-27 21:46:54.547653 | orchestrator | Saturday 27 September 2025 21:46:43 +0000 (0:00:00.268) 0:04:59.080 **** 2025-09-27 21:46:54.547673 | orchestrator | ok: [testbed-manager] =>  2025-09-27 21:46:54.547691 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:46:54.547710 | orchestrator | ok: [testbed-node-3] =>  2025-09-27 21:46:54.547729 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 
21:46:54.547748 | orchestrator | ok: [testbed-node-4] =>  2025-09-27 21:46:54.547766 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:46:54.547785 | orchestrator | ok: [testbed-node-5] =>  2025-09-27 21:46:54.547804 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:46:54.547822 | orchestrator | ok: [testbed-node-0] =>  2025-09-27 21:46:54.547841 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:46:54.547880 | orchestrator | ok: [testbed-node-1] =>  2025-09-27 21:46:54.547899 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:46:54.547918 | orchestrator | ok: [testbed-node-2] =>  2025-09-27 21:46:54.547936 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:46:54.547954 | orchestrator | 2025-09-27 21:46:54.547972 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-27 21:46:54.548018 | orchestrator | Saturday 27 September 2025 21:46:43 +0000 (0:00:00.285) 0:04:59.366 **** 2025-09-27 21:46:54.548038 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:54.548057 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:46:54.548075 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:46:54.548094 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:46:54.548113 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:54.548131 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:54.548150 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:54.548169 | orchestrator | 2025-09-27 21:46:54.548188 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-27 21:46:54.548207 | orchestrator | Saturday 27 September 2025 21:46:44 +0000 (0:00:00.251) 0:04:59.617 **** 2025-09-27 21:46:54.548227 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:54.548245 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:46:54.548264 | orchestrator | skipping: [testbed-node-4] 2025-09-27 
21:46:54.548283 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:46:54.548302 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:54.548321 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:54.548340 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:54.548358 | orchestrator | 2025-09-27 21:46:54.548377 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-27 21:46:54.548397 | orchestrator | Saturday 27 September 2025 21:46:44 +0000 (0:00:00.282) 0:04:59.899 **** 2025-09-27 21:46:54.548428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:46:54.548450 | orchestrator | 2025-09-27 21:46:54.548469 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-27 21:46:54.548487 | orchestrator | Saturday 27 September 2025 21:46:44 +0000 (0:00:00.404) 0:05:00.304 **** 2025-09-27 21:46:54.548506 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:54.548524 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:54.548543 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:54.548561 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:46:54.548581 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:54.548600 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:54.548618 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:54.548636 | orchestrator | 2025-09-27 21:46:54.548655 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-27 21:46:54.548674 | orchestrator | Saturday 27 September 2025 21:46:45 +0000 (0:00:00.744) 0:05:01.049 **** 2025-09-27 21:46:54.548692 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:54.548724 | orchestrator | ok: 
[testbed-node-5] 2025-09-27 21:46:54.548743 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:54.548762 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:54.548782 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:54.548801 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:46:54.548820 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:46:54.548839 | orchestrator | 2025-09-27 21:46:54.548929 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-27 21:46:54.548954 | orchestrator | Saturday 27 September 2025 21:46:48 +0000 (0:00:02.886) 0:05:03.936 **** 2025-09-27 21:46:54.548973 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-27 21:46:54.548993 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-27 21:46:54.549013 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-27 21:46:54.549031 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-27 21:46:54.549051 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-27 21:46:54.549069 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-27 21:46:54.549088 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:54.549107 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-27 21:46:54.549126 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-27 21:46:54.549145 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-27 21:46:54.549164 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:46:54.549183 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-27 21:46:54.549202 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-27 21:46:54.549221 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-27 21:46:54.549240 | orchestrator | skipping: [testbed-node-4] 
2025-09-27 21:46:54.549258 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-27 21:46:54.549278 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-27 21:46:54.549295 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-27 21:46:54.549312 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:46:54.549329 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-27 21:46:54.549346 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-27 21:46:54.549363 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-27 21:46:54.549379 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:54.549396 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:54.549413 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-27 21:46:54.549430 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-27 21:46:54.549447 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-27 21:46:54.549464 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:54.549480 | orchestrator | 2025-09-27 21:46:54.549495 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-27 21:46:54.549511 | orchestrator | Saturday 27 September 2025 21:46:49 +0000 (0:00:00.588) 0:05:04.524 **** 2025-09-27 21:46:54.549528 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:54.549545 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:46:54.549562 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:46:54.549579 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:54.549595 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:54.549612 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:54.549629 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:46:54.549645 | orchestrator | 2025-09-27 21:46:54.549676 | orchestrator | TASK 
[osism.services.docker : Add repository gpg key] ************************** 2025-09-27 21:47:46.029401 | orchestrator | Saturday 27 September 2025 21:46:54 +0000 (0:00:05.380) 0:05:09.905 **** 2025-09-27 21:47:46.029533 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.029551 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.029562 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.029602 | orchestrator | ok: [testbed-manager] 2025-09-27 21:47:46.029615 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.029625 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:46.029636 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.029647 | orchestrator | 2025-09-27 21:47:46.029658 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-27 21:47:46.029670 | orchestrator | Saturday 27 September 2025 21:46:55 +0000 (0:00:01.172) 0:05:11.077 **** 2025-09-27 21:47:46.029680 | orchestrator | ok: [testbed-manager] 2025-09-27 21:47:46.029691 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.029702 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.029712 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.029724 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:46.029735 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.029746 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.029756 | orchestrator | 2025-09-27 21:47:46.029767 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-27 21:47:46.029778 | orchestrator | Saturday 27 September 2025 21:47:03 +0000 (0:00:07.384) 0:05:18.461 **** 2025-09-27 21:47:46.029788 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.029814 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.029884 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.029895 | orchestrator | changed: 
[testbed-node-1] 2025-09-27 21:47:46.029906 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.029916 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.029927 | orchestrator | changed: [testbed-manager] 2025-09-27 21:47:46.029939 | orchestrator | 2025-09-27 21:47:46.029951 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-27 21:47:46.029963 | orchestrator | Saturday 27 September 2025 21:47:08 +0000 (0:00:05.361) 0:05:23.823 **** 2025-09-27 21:47:46.029976 | orchestrator | ok: [testbed-manager] 2025-09-27 21:47:46.029988 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.030000 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.030012 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.030091 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.030103 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:46.030115 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.030127 | orchestrator | 2025-09-27 21:47:46.030140 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-27 21:47:46.030152 | orchestrator | Saturday 27 September 2025 21:47:09 +0000 (0:00:01.166) 0:05:24.989 **** 2025-09-27 21:47:46.030165 | orchestrator | ok: [testbed-manager] 2025-09-27 21:47:46.030178 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.030191 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.030203 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.030215 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.030228 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:46.030240 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.030252 | orchestrator | 2025-09-27 21:47:46.030265 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-27 21:47:46.030277 | orchestrator | Saturday 27 
September 2025 21:47:10 +0000 (0:00:01.242) 0:05:26.232 **** 2025-09-27 21:47:46.030288 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:47:46.030299 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:47:46.030309 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:47:46.030320 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:46.030331 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:46.030341 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:46.030352 | orchestrator | changed: [testbed-manager] 2025-09-27 21:47:46.030362 | orchestrator | 2025-09-27 21:47:46.030373 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-27 21:47:46.030384 | orchestrator | Saturday 27 September 2025 21:47:11 +0000 (0:00:00.508) 0:05:26.741 **** 2025-09-27 21:47:46.030405 | orchestrator | ok: [testbed-manager] 2025-09-27 21:47:46.030416 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.030427 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.030437 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.030448 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:46.030458 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.030475 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.030492 | orchestrator | 2025-09-27 21:47:46.030503 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-27 21:47:46.030514 | orchestrator | Saturday 27 September 2025 21:47:19 +0000 (0:00:08.150) 0:05:34.892 **** 2025-09-27 21:47:46.030525 | orchestrator | changed: [testbed-manager] 2025-09-27 21:47:46.030536 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.030546 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.030557 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.030567 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.030578 | 
orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:46.030588 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.030598 | orchestrator | 2025-09-27 21:47:46.030609 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-27 21:47:46.030620 | orchestrator | Saturday 27 September 2025 21:47:20 +0000 (0:00:00.883) 0:05:35.775 **** 2025-09-27 21:47:46.030630 | orchestrator | ok: [testbed-manager] 2025-09-27 21:47:46.030640 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.030651 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.030661 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:46.030672 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.030682 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.030693 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.030703 | orchestrator | 2025-09-27 21:47:46.030714 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-27 21:47:46.030724 | orchestrator | Saturday 27 September 2025 21:47:28 +0000 (0:00:08.083) 0:05:43.858 **** 2025-09-27 21:47:46.030735 | orchestrator | ok: [testbed-manager] 2025-09-27 21:47:46.030745 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.030756 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.030766 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:46.030777 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.030787 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.030835 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.030848 | orchestrator | 2025-09-27 21:47:46.030859 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-27 21:47:46.030870 | orchestrator | Saturday 27 September 2025 21:47:39 +0000 (0:00:10.595) 0:05:54.453 **** 2025-09-27 21:47:46.030881 | orchestrator | ok: 
[testbed-manager] => (item=python3-docker) 2025-09-27 21:47:46.030893 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-27 21:47:46.030903 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-27 21:47:46.030914 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-27 21:47:46.030925 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-27 21:47:46.030935 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-27 21:47:46.030946 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-27 21:47:46.030956 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-27 21:47:46.030967 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-27 21:47:46.030977 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-27 21:47:46.030988 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-27 21:47:46.030998 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-27 21:47:46.031009 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-27 21:47:46.031020 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-27 21:47:46.031030 | orchestrator | 2025-09-27 21:47:46.031041 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-27 21:47:46.031059 | orchestrator | Saturday 27 September 2025 21:47:40 +0000 (0:00:01.185) 0:05:55.639 **** 2025-09-27 21:47:46.031070 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:47:46.031080 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:47:46.031091 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:47:46.031102 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:47:46.031112 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:46.031122 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:46.031133 | orchestrator | skipping: [testbed-node-2] 
2025-09-27 21:47:46.031144 | orchestrator | 2025-09-27 21:47:46.031154 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-27 21:47:46.031165 | orchestrator | Saturday 27 September 2025 21:47:40 +0000 (0:00:00.506) 0:05:56.145 **** 2025-09-27 21:47:46.031176 | orchestrator | ok: [testbed-manager] 2025-09-27 21:47:46.031186 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:47:46.031200 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:47:46.031217 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:46.031229 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:46.031239 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:46.031250 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:47:46.031260 | orchestrator | 2025-09-27 21:47:46.031271 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-27 21:47:46.031282 | orchestrator | Saturday 27 September 2025 21:47:44 +0000 (0:00:03.556) 0:05:59.702 **** 2025-09-27 21:47:46.031293 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:47:46.031304 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:47:46.031314 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:47:46.031324 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:47:46.031335 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:46.031345 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:46.031356 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:46.031366 | orchestrator | 2025-09-27 21:47:46.031377 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-27 21:47:46.031389 | orchestrator | Saturday 27 September 2025 21:47:44 +0000 (0:00:00.511) 0:06:00.213 **** 2025-09-27 21:47:46.031399 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  
2025-09-27 21:47:46.031410 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-27 21:47:46.031421 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:47:46.031432 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-27 21:47:46.031442 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-27 21:47:46.031453 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:47:46.031463 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-27 21:47:46.031474 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-27 21:47:46.031484 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:47:46.031495 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-27 21:47:46.031506 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-27 21:47:46.031516 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:47:46.031527 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-27 21:47:46.031537 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-27 21:47:46.031548 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:46.031558 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-27 21:47:46.031569 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-27 21:47:46.031579 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:46.031590 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-27 21:47:46.031601 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-27 21:47:46.031618 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:46.031629 | orchestrator | 2025-09-27 21:47:46.031640 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-27 21:47:46.031650 | orchestrator | Saturday 27 
September 2025 21:47:45 +0000 (0:00:00.683) 0:06:00.896 **** 2025-09-27 21:47:46.031661 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:47:46.031671 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:47:46.031682 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:47:46.031692 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:47:46.031703 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:46.031714 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:46.031724 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:46.031735 | orchestrator | 2025-09-27 21:47:46.031753 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-27 21:48:06.562645 | orchestrator | Saturday 27 September 2025 21:47:46 +0000 (0:00:00.501) 0:06:01.397 **** 2025-09-27 21:48:06.562768 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:48:06.562791 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:48:06.562852 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:48:06.562867 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:48:06.562878 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:48:06.562889 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:48:06.562900 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:48:06.562911 | orchestrator | 2025-09-27 21:48:06.562923 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-27 21:48:06.562934 | orchestrator | Saturday 27 September 2025 21:47:46 +0000 (0:00:00.487) 0:06:01.885 **** 2025-09-27 21:48:06.562945 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:48:06.562956 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:48:06.562966 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:48:06.562977 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:48:06.562996 | orchestrator | skipping: [testbed-node-0] 
2025-09-27 21:48:06.563014 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:48:06.563031 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:48:06.563048 | orchestrator | 2025-09-27 21:48:06.563123 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-27 21:48:06.563144 | orchestrator | Saturday 27 September 2025 21:47:47 +0000 (0:00:00.516) 0:06:02.402 **** 2025-09-27 21:48:06.563156 | orchestrator | ok: [testbed-manager] 2025-09-27 21:48:06.563167 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:48:06.563180 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:48:06.563193 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:48:06.563205 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:48:06.563217 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:48:06.563229 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:48:06.563241 | orchestrator | 2025-09-27 21:48:06.563253 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-27 21:48:06.563266 | orchestrator | Saturday 27 September 2025 21:47:48 +0000 (0:00:01.667) 0:06:04.069 **** 2025-09-27 21:48:06.563279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:48:06.563294 | orchestrator | 2025-09-27 21:48:06.563306 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-27 21:48:06.563319 | orchestrator | Saturday 27 September 2025 21:47:49 +0000 (0:00:01.036) 0:06:05.106 **** 2025-09-27 21:48:06.563331 | orchestrator | ok: [testbed-manager] 2025-09-27 21:48:06.563343 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:48:06.563355 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:48:06.563368 | orchestrator | changed: [testbed-node-5] 
2025-09-27 21:48:06.563380 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:06.563392 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:06.563426 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:06.563438 | orchestrator |
2025-09-27 21:48:06.563451 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-09-27 21:48:06.563463 | orchestrator | Saturday 27 September 2025 21:47:50 +0000 (0:00:00.840) 0:06:05.946 ****
2025-09-27 21:48:06.563475 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:06.563487 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:06.563499 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:06.563511 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:06.563524 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:06.563535 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:06.563545 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:06.563555 | orchestrator |
2025-09-27 21:48:06.563566 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-09-27 21:48:06.563577 | orchestrator | Saturday 27 September 2025 21:47:51 +0000 (0:00:00.832) 0:06:06.779 ****
2025-09-27 21:48:06.563588 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:06.563598 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:06.563609 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:06.563619 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:06.563630 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:06.563640 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:06.563651 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:06.563661 | orchestrator |
2025-09-27 21:48:06.563672 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-09-27 21:48:06.563683 | orchestrator | Saturday 27 September 2025 21:47:52 +0000 (0:00:01.484) 0:06:08.264 ****
2025-09-27 21:48:06.563693 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:48:06.563704 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:06.563714 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:06.563725 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:06.563735 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:06.563747 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:06.563766 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:06.563784 | orchestrator |
2025-09-27 21:48:06.563801 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-09-27 21:48:06.563845 | orchestrator | Saturday 27 September 2025 21:47:54 +0000 (0:00:01.348) 0:06:09.612 ****
2025-09-27 21:48:06.563863 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:06.563881 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:06.563899 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:06.563918 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:06.563939 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:06.563957 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:06.563968 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:06.563979 | orchestrator |
2025-09-27 21:48:06.563989 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-09-27 21:48:06.564000 | orchestrator | Saturday 27 September 2025 21:47:55 +0000 (0:00:01.258) 0:06:10.871 ****
2025-09-27 21:48:06.564010 | orchestrator | changed: [testbed-manager]
2025-09-27 21:48:06.564021 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:06.564032 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:06.564042 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:06.564052 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:06.564063 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:06.564073 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:06.564083 | orchestrator |
2025-09-27 21:48:06.564115 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-27 21:48:06.564126 | orchestrator | Saturday 27 September 2025 21:47:56 +0000 (0:00:01.371) 0:06:12.242 ****
2025-09-27 21:48:06.564137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 21:48:06.564158 | orchestrator |
2025-09-27 21:48:06.564169 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-27 21:48:06.564180 | orchestrator | Saturday 27 September 2025 21:47:57 +0000 (0:00:01.040) 0:06:13.283 ****
2025-09-27 21:48:06.564190 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:06.564201 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:06.564212 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:06.564222 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:06.564233 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:06.564243 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:06.564254 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:06.564264 | orchestrator |
2025-09-27 21:48:06.564275 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-27 21:48:06.564286 | orchestrator | Saturday 27 September 2025 21:47:59 +0000 (0:00:01.090) 0:06:14.661 ****
2025-09-27 21:48:06.564296 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:06.564307 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:06.564317 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:06.564327 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:06.564338 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:06.564348 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:06.564358 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:06.564369 | orchestrator |
2025-09-27 21:48:06.564379 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-27 21:48:06.564390 | orchestrator | Saturday 27 September 2025 21:48:00 +0000 (0:00:01.090) 0:06:15.752 ****
2025-09-27 21:48:06.564401 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:06.564411 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:06.564421 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:06.564432 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:06.564442 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:06.564452 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:06.564463 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:06.564473 | orchestrator |
2025-09-27 21:48:06.564484 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-27 21:48:06.564494 | orchestrator | Saturday 27 September 2025 21:48:02 +0000 (0:00:01.859) 0:06:17.611 ****
2025-09-27 21:48:06.564505 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:06.564516 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:06.564526 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:06.564536 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:06.564546 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:06.564557 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:06.564568 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:06.564578 | orchestrator |
2025-09-27 21:48:06.564589 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-27 21:48:06.564599 | orchestrator | Saturday 27 September 2025 21:48:03 +0000 (0:00:00.992) 0:06:18.604 ****
2025-09-27 21:48:06.564610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 21:48:06.564621 | orchestrator |
2025-09-27 21:48:06.564632 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-27 21:48:06.564642 | orchestrator | Saturday 27 September 2025 21:48:04 +0000 (0:00:00.908) 0:06:19.512 ****
2025-09-27 21:48:06.564653 | orchestrator |
2025-09-27 21:48:06.564663 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-27 21:48:06.564674 | orchestrator | Saturday 27 September 2025 21:48:04 +0000 (0:00:00.035) 0:06:19.548 ****
2025-09-27 21:48:06.564685 | orchestrator |
2025-09-27 21:48:06.564695 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-27 21:48:06.564706 | orchestrator | Saturday 27 September 2025 21:48:04 +0000 (0:00:00.039) 0:06:19.587 ****
2025-09-27 21:48:06.564716 | orchestrator |
2025-09-27 21:48:06.564727 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-27 21:48:06.564750 | orchestrator | Saturday 27 September 2025 21:48:04 +0000 (0:00:00.035) 0:06:19.623 ****
2025-09-27 21:48:06.564761 | orchestrator |
2025-09-27 21:48:06.564771 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-27 21:48:06.564782 | orchestrator | Saturday 27 September 2025 21:48:04 +0000 (0:00:00.035) 0:06:19.658 ****
2025-09-27 21:48:06.564793 | orchestrator |
2025-09-27 21:48:06.564824 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-27 21:48:06.564841 | orchestrator | Saturday 27 September 2025 21:48:04 +0000 (0:00:00.039) 0:06:19.698 ****
2025-09-27 21:48:06.564852 | orchestrator |
2025-09-27 21:48:06.564863 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-27 21:48:06.564874 | orchestrator | Saturday 27 September 2025 21:48:04 +0000 (0:00:00.035) 0:06:19.733 ****
2025-09-27 21:48:06.564884 | orchestrator |
2025-09-27 21:48:06.564895 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-27 21:48:06.564906 | orchestrator | Saturday 27 September 2025 21:48:04 +0000 (0:00:00.036) 0:06:19.769 ****
2025-09-27 21:48:06.564916 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:06.564932 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:06.564950 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:06.564968 | orchestrator |
2025-09-27 21:48:06.564985 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-27 21:48:06.565003 | orchestrator | Saturday 27 September 2025 21:48:05 +0000 (0:00:00.938) 0:06:20.708 ****
2025-09-27 21:48:06.565021 | orchestrator | changed: [testbed-manager]
2025-09-27 21:48:06.565042 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:06.565060 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:06.565076 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:06.565087 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:06.565105 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:32.400970 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:32.401068 | orchestrator |
2025-09-27 21:48:32.401084 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-27 21:48:32.401097 | orchestrator | Saturday 27 September 2025 21:48:06 +0000 (0:00:01.218) 0:06:21.927 ****
2025-09-27 21:48:32.401108 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:48:32.401119 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:32.401129 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:32.401140 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:32.401151 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:32.401162 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:32.401172 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:32.401183 | orchestrator |
2025-09-27 21:48:32.401194 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-09-27 21:48:32.401204 | orchestrator | Saturday 27 September 2025 21:48:09 +0000 (0:00:03.243) 0:06:25.170 ****
2025-09-27 21:48:32.401215 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:48:32.401225 | orchestrator |
2025-09-27 21:48:32.401236 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-09-27 21:48:32.401247 | orchestrator | Saturday 27 September 2025 21:48:09 +0000 (0:00:00.095) 0:06:25.266 ****
2025-09-27 21:48:32.401258 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.401269 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:32.401293 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:32.401304 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:32.401315 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:32.401326 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:32.401336 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:32.401346 | orchestrator |
2025-09-27 21:48:32.401358 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-09-27 21:48:32.401369 | orchestrator | Saturday 27 September 2025 21:48:10 +0000 (0:00:00.894) 0:06:26.161 ****
2025-09-27 21:48:32.401380 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:48:32.401410 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:48:32.401421 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:48:32.401432 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:48:32.401443 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:48:32.401453 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:48:32.401464 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:48:32.401474 | orchestrator |
2025-09-27 21:48:32.401485 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-09-27 21:48:32.401495 | orchestrator | Saturday 27 September 2025 21:48:11 +0000 (0:00:00.437) 0:06:26.598 ****
2025-09-27 21:48:32.401507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 21:48:32.401520 | orchestrator |
2025-09-27 21:48:32.401532 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-09-27 21:48:32.401546 | orchestrator | Saturday 27 September 2025 21:48:12 +0000 (0:00:00.907) 0:06:27.505 ****
2025-09-27 21:48:32.401557 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.401570 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:32.401582 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:32.401595 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:32.401607 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:32.401620 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:32.401631 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:32.401643 | orchestrator |
2025-09-27 21:48:32.401655 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-09-27 21:48:32.401667 | orchestrator | Saturday 27 September 2025 21:48:12 +0000 (0:00:00.774) 0:06:28.279 ****
2025-09-27 21:48:32.401679 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-09-27 21:48:32.401691 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-09-27 21:48:32.401703 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-09-27 21:48:32.401715 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-09-27 21:48:32.401726 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-09-27 21:48:32.401739 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-09-27 21:48:32.401751 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-09-27 21:48:32.401763 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-09-27 21:48:32.401775 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-09-27 21:48:32.401807 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-09-27 21:48:32.401819 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-09-27 21:48:32.401831 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-09-27 21:48:32.401844 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-09-27 21:48:32.401856 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-09-27 21:48:32.401868 | orchestrator |
2025-09-27 21:48:32.401879 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-09-27 21:48:32.401890 | orchestrator | Saturday 27 September 2025 21:48:15 +0000 (0:00:02.315) 0:06:30.595 ****
2025-09-27 21:48:32.401900 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:48:32.401911 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:48:32.401921 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:48:32.401931 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:48:32.401942 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:48:32.401952 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:48:32.401963 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:48:32.401973 | orchestrator |
2025-09-27 21:48:32.401984 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-09-27 21:48:32.401995 | orchestrator | Saturday 27 September 2025 21:48:15 +0000 (0:00:00.418) 0:06:31.014 ****
2025-09-27 21:48:32.402083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 21:48:32.402099 | orchestrator |
2025-09-27 21:48:32.402110 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-09-27 21:48:32.402120 | orchestrator | Saturday 27 September 2025 21:48:16 +0000 (0:00:00.792) 0:06:31.806 ****
2025-09-27 21:48:32.402131 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.402141 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:32.402152 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:32.402162 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:32.402173 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:32.402184 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:32.402194 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:32.402205 | orchestrator |
2025-09-27 21:48:32.402215 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-09-27 21:48:32.402226 | orchestrator | Saturday 27 September 2025 21:48:17 +0000 (0:00:00.746) 0:06:32.552 ****
2025-09-27 21:48:32.402236 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.402247 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:32.402257 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:32.402268 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:32.402278 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:32.402288 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:32.402299 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:32.402309 | orchestrator |
2025-09-27 21:48:32.402326 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-09-27 21:48:32.402337 | orchestrator | Saturday 27 September 2025 21:48:17 +0000 (0:00:00.747) 0:06:33.300 ****
2025-09-27 21:48:32.402347 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:48:32.402358 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:48:32.402368 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:48:32.402379 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:48:32.402389 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:48:32.402400 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:48:32.402410 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:48:32.402421 | orchestrator |
2025-09-27 21:48:32.402431 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-09-27 21:48:32.402442 | orchestrator | Saturday 27 September 2025 21:48:18 +0000 (0:00:00.431) 0:06:33.731 ****
2025-09-27 21:48:32.402453 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.402463 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:32.402474 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:32.402484 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:32.402495 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:48:32.402505 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:48:32.402516 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:48:32.402526 | orchestrator |
2025-09-27 21:48:32.402537 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-09-27 21:48:32.402547 | orchestrator | Saturday 27 September 2025 21:48:19 +0000 (0:00:01.445) 0:06:35.177 ****
2025-09-27 21:48:32.402558 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:48:32.402568 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:48:32.402579 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:48:32.402589 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:48:32.402600 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:48:32.402610 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:48:32.402621 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:48:32.402631 | orchestrator |
2025-09-27 21:48:32.402642 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-09-27 21:48:32.402653 | orchestrator | Saturday 27 September 2025 21:48:20 +0000 (0:00:00.412) 0:06:35.589 ****
2025-09-27 21:48:32.402663 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.402681 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:32.402691 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:32.402702 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:32.402712 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:32.402723 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:32.402733 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:32.402743 | orchestrator |
2025-09-27 21:48:32.402754 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-09-27 21:48:32.402765 | orchestrator | Saturday 27 September 2025 21:48:27 +0000 (0:00:06.920) 0:06:42.510 ****
2025-09-27 21:48:32.402775 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.402803 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:32.402814 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:32.402824 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:32.402835 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:32.402845 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:32.402855 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:32.402866 | orchestrator |
2025-09-27 21:48:32.402876 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-09-27 21:48:32.402887 | orchestrator | Saturday 27 September 2025 21:48:28 +0000 (0:00:01.159) 0:06:43.669 ****
2025-09-27 21:48:32.402898 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.402908 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:32.402918 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:32.402929 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:32.402940 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:32.402950 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:32.402960 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:32.402971 | orchestrator |
2025-09-27 21:48:32.402981 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-09-27 21:48:32.402992 | orchestrator | Saturday 27 September 2025 21:48:30 +0000 (0:00:01.783) 0:06:45.453 ****
2025-09-27 21:48:32.403003 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.403013 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:48:32.403023 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:48:32.403034 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:48:32.403044 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:48:32.403055 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:48:32.403065 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:48:32.403075 | orchestrator |
2025-09-27 21:48:32.403086 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-27 21:48:32.403097 | orchestrator | Saturday 27 September 2025 21:48:31 +0000 (0:00:01.564) 0:06:47.018 ****
2025-09-27 21:48:32.403107 | orchestrator | ok: [testbed-manager]
2025-09-27 21:48:32.403118 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:48:32.403128 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:48:32.403139 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:48:32.403157 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.383177 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.383302 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.383319 | orchestrator |
2025-09-27 21:49:00.383333 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-27 21:49:00.383346 | orchestrator | Saturday 27 September 2025 21:48:32 +0000 (0:00:00.754) 0:06:47.772 ****
2025-09-27 21:49:00.383357 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:49:00.383369 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:49:00.383380 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:49:00.383391 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:49:00.383402 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:49:00.383413 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:49:00.383424 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:49:00.383435 | orchestrator |
2025-09-27 21:49:00.383447 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-09-27 21:49:00.383458 | orchestrator | Saturday 27 September 2025 21:48:33 +0000 (0:00:00.809) 0:06:48.582 ****
2025-09-27 21:49:00.383498 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:49:00.383509 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:49:00.383520 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:49:00.383531 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:49:00.383541 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:49:00.383567 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:49:00.383578 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:49:00.383589 | orchestrator |
2025-09-27 21:49:00.383600 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-09-27 21:49:00.383611 | orchestrator | Saturday 27 September 2025 21:48:33 +0000 (0:00:00.539) 0:06:49.121 ****
2025-09-27 21:49:00.383621 | orchestrator | ok: [testbed-manager]
2025-09-27 21:49:00.383632 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:49:00.383643 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:49:00.383653 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:49:00.383664 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.383675 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.383685 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.383696 | orchestrator |
2025-09-27 21:49:00.383708 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-09-27 21:49:00.383720 | orchestrator | Saturday 27 September 2025 21:48:34 +0000 (0:00:00.495) 0:06:49.617 ****
2025-09-27 21:49:00.383733 | orchestrator | ok: [testbed-manager]
2025-09-27 21:49:00.383745 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:49:00.383757 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:49:00.383804 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:49:00.383822 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.383840 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.383860 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.383873 | orchestrator |
2025-09-27 21:49:00.383885 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-09-27 21:49:00.383898 | orchestrator | Saturday 27 September 2025 21:48:34 +0000 (0:00:00.522) 0:06:50.139 ****
2025-09-27 21:49:00.383910 | orchestrator | ok: [testbed-manager]
2025-09-27 21:49:00.383922 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:49:00.383932 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:49:00.383943 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:49:00.383954 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.383964 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.383974 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.383985 | orchestrator |
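Every task in this log reports a per-host result line of the form `ok: [host]`, `changed: [host]`, or `skipping: [host]`. When scanning a long run like this one, it can help to tally those statuses per host; a minimal sketch (the `tally` helper and sample lines are illustrative, not part of the job itself — the sample lines are copied from the output above):

```python
import re
from collections import Counter

# Matches Ansible per-host result lines such as
#   "changed: [testbed-node-3]" or "skipping: [testbed-manager]"
RESULT_RE = re.compile(r"^(ok|changed|skipping|failed|fatal): \[([\w-]+)\]")

def tally(lines):
    """Count result statuses per host across a stream of log lines."""
    counts = {}
    for line in lines:
        m = RESULT_RE.match(line.strip())
        if m:
            status, host = m.groups()
            counts.setdefault(host, Counter())[status] += 1
    return counts

sample = [
    "ok: [testbed-manager]",
    "changed: [testbed-node-3]",
    "changed: [testbed-node-3]",
    "skipping: [testbed-manager]",
]
print(tally(sample))
```

A tally like this mirrors the play recap Ansible prints at the end of a run, but can be computed over any slice of the log.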
2025-09-27 21:49:00.383996 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-09-27 21:49:00.384007 | orchestrator | Saturday 27 September 2025 21:48:35 +0000 (0:00:00.515) 0:06:50.654 ****
2025-09-27 21:49:00.384017 | orchestrator | ok: [testbed-manager]
2025-09-27 21:49:00.384028 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:49:00.384038 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:49:00.384049 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:49:00.384059 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.384070 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.384080 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.384091 | orchestrator |
2025-09-27 21:49:00.384101 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-09-27 21:49:00.384112 | orchestrator | Saturday 27 September 2025 21:48:40 +0000 (0:00:05.296) 0:06:55.950 ****
2025-09-27 21:49:00.384123 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:49:00.384133 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:49:00.384144 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:49:00.384155 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:49:00.384166 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:49:00.384177 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:49:00.384187 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:49:00.384198 | orchestrator |
2025-09-27 21:49:00.384209 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-09-27 21:49:00.384230 | orchestrator | Saturday 27 September 2025 21:48:41 +0000 (0:00:00.445) 0:06:56.396 ****
2025-09-27 21:49:00.384242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 21:49:00.384255 | orchestrator |
2025-09-27 21:49:00.384266 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-27 21:49:00.384277 | orchestrator | Saturday 27 September 2025 21:48:41 +0000 (0:00:00.781) 0:06:57.178 ****
2025-09-27 21:49:00.384288 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:49:00.384299 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:49:00.384309 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.384320 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:49:00.384331 | orchestrator | ok: [testbed-manager]
2025-09-27 21:49:00.384341 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.384352 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.384363 | orchestrator |
2025-09-27 21:49:00.384374 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-27 21:49:00.384384 | orchestrator | Saturday 27 September 2025 21:48:43 +0000 (0:00:01.739) 0:06:58.917 ****
2025-09-27 21:49:00.384395 | orchestrator | ok: [testbed-manager]
2025-09-27 21:49:00.384406 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:49:00.384417 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:49:00.384427 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:49:00.384438 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.384448 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.384459 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.384470 | orchestrator |
2025-09-27 21:49:00.384506 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-09-27 21:49:00.384518 | orchestrator | Saturday 27 September 2025 21:48:44 +0000 (0:00:01.061) 0:06:59.979 ****
2025-09-27 21:49:00.384529 | orchestrator | ok: [testbed-manager]
2025-09-27 21:49:00.384540 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:49:00.384550 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:49:00.384561 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:49:00.384572 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.384582 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.384593 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.384603 | orchestrator |
2025-09-27 21:49:00.384615 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-09-27 21:49:00.384625 | orchestrator | Saturday 27 September 2025 21:48:45 +0000 (0:00:00.806) 0:07:00.786 ****
2025-09-27 21:49:00.384636 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-27 21:49:00.384650 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-27 21:49:00.384661 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-27 21:49:00.384672 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-27 21:49:00.384683 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-27 21:49:00.384703 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-27 21:49:00.384714 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-27 21:49:00.384725 | orchestrator |
2025-09-27 21:49:00.384736 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-09-27 21:49:00.384747 | orchestrator | Saturday 27 September 2025 21:48:46 +0000 (0:00:01.471) 0:07:02.258 ****
2025-09-27 21:49:00.384797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 21:49:00.384810 | orchestrator |
2025-09-27 21:49:00.384821 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-09-27 21:49:00.384832 | orchestrator | Saturday 27 September 2025 21:48:47 +0000 (0:00:00.789) 0:07:03.047 ****
2025-09-27 21:49:00.384843 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:49:00.384869 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:49:00.384880 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:49:00.384902 | orchestrator | changed: [testbed-manager]
2025-09-27 21:49:00.384922 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:49:00.384942 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:49:00.384961 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:49:00.384979 | orchestrator |
2025-09-27 21:49:00.385000 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-09-27 21:49:00.385019 | orchestrator | Saturday 27 September 2025 21:48:55 +0000 (0:00:07.754) 0:07:10.801 ****
2025-09-27 21:49:00.385036 | orchestrator | ok: [testbed-manager]
2025-09-27 21:49:00.385047 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:49:00.385058 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:49:00.385068 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:49:00.385079 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.385089 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.385100 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.385110 | orchestrator |
2025-09-27 21:49:00.385121 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-27 21:49:00.385132 | orchestrator | Saturday 27 September 2025 21:48:57 +0000 (0:00:01.896) 0:07:12.698 ****
2025-09-27 21:49:00.385142 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:49:00.385153 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:49:00.385163 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:49:00.385174 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:49:00.385184 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:49:00.385195 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:49:00.385205 | orchestrator |
2025-09-27 21:49:00.385216 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-27 21:49:00.385226 | orchestrator | Saturday 27 September 2025 21:48:58 +0000 (0:00:01.289) 0:07:13.987 ****
2025-09-27 21:49:00.385237 | orchestrator | changed: [testbed-manager]
2025-09-27 21:49:00.385248 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:49:00.385259 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:49:00.385270 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:49:00.385280 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:49:00.385291 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:49:00.385301 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:49:00.385312 | orchestrator |
2025-09-27 21:49:00.385323 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-27 21:49:00.385333 | orchestrator |
2025-09-27 21:49:00.385344 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-27 21:49:00.385355 | orchestrator | Saturday 27 September 2025 21:48:59 +0000 (0:00:01.250) 0:07:15.238 ****
2025-09-27 21:49:00.385365 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:49:00.385376 | orchestrator |
skipping: [testbed-node-3] 2025-09-27 21:49:00.385387 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:00.385397 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:00.385408 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:00.385419 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:00.385438 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:26.945802 | orchestrator | 2025-09-27 21:49:26.945932 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-27 21:49:26.945990 | orchestrator | 2025-09-27 21:49:26.946010 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-27 21:49:26.946106 | orchestrator | Saturday 27 September 2025 21:49:00 +0000 (0:00:00.515) 0:07:15.753 **** 2025-09-27 21:49:26.946126 | orchestrator | changed: [testbed-manager] 2025-09-27 21:49:26.946147 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:26.946165 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:26.946183 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:26.946199 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:26.946210 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:26.946221 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:26.946232 | orchestrator | 2025-09-27 21:49:26.946245 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-27 21:49:26.946257 | orchestrator | Saturday 27 September 2025 21:49:01 +0000 (0:00:01.528) 0:07:17.282 **** 2025-09-27 21:49:26.946269 | orchestrator | ok: [testbed-manager] 2025-09-27 21:49:26.946283 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:49:26.946295 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:49:26.946307 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:49:26.946319 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:26.946349 | orchestrator | ok: 
[testbed-node-1] 2025-09-27 21:49:26.946362 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:49:26.946374 | orchestrator | 2025-09-27 21:49:26.946386 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-27 21:49:26.946399 | orchestrator | Saturday 27 September 2025 21:49:03 +0000 (0:00:01.435) 0:07:18.717 **** 2025-09-27 21:49:26.946412 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:49:26.946423 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:26.946436 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:26.946447 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:26.946460 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:26.946472 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:26.946485 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:26.946496 | orchestrator | 2025-09-27 21:49:26.946509 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-27 21:49:26.946521 | orchestrator | Saturday 27 September 2025 21:49:03 +0000 (0:00:00.493) 0:07:19.211 **** 2025-09-27 21:49:26.946535 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:49:26.946549 | orchestrator | 2025-09-27 21:49:26.946562 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-27 21:49:26.946574 | orchestrator | Saturday 27 September 2025 21:49:04 +0000 (0:00:00.956) 0:07:20.167 **** 2025-09-27 21:49:26.946588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:49:26.946602 | orchestrator | 2025-09-27 21:49:26.946613 | orchestrator | 
TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-27 21:49:26.946624 | orchestrator | Saturday 27 September 2025 21:49:05 +0000 (0:00:00.815) 0:07:20.983 **** 2025-09-27 21:49:26.946634 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:26.946644 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:26.946655 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:26.946665 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:26.946676 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:26.946686 | orchestrator | changed: [testbed-manager] 2025-09-27 21:49:26.946697 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:26.946708 | orchestrator | 2025-09-27 21:49:26.946718 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-27 21:49:26.946729 | orchestrator | Saturday 27 September 2025 21:49:14 +0000 (0:00:08.784) 0:07:29.768 **** 2025-09-27 21:49:26.946764 | orchestrator | changed: [testbed-manager] 2025-09-27 21:49:26.946788 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:26.946799 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:26.946809 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:26.946820 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:26.946830 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:26.946841 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:26.946851 | orchestrator | 2025-09-27 21:49:26.946862 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-27 21:49:26.946873 | orchestrator | Saturday 27 September 2025 21:49:15 +0000 (0:00:00.757) 0:07:30.525 **** 2025-09-27 21:49:26.946883 | orchestrator | changed: [testbed-manager] 2025-09-27 21:49:26.946894 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:26.946905 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:26.946916 | 
orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:26.946927 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:26.946937 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:26.946948 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:26.946959 | orchestrator | 2025-09-27 21:49:26.946969 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-27 21:49:26.946980 | orchestrator | Saturday 27 September 2025 21:49:16 +0000 (0:00:01.431) 0:07:31.956 **** 2025-09-27 21:49:26.946991 | orchestrator | changed: [testbed-manager] 2025-09-27 21:49:26.947001 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:26.947012 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:26.947022 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:26.947033 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:26.947043 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:26.947053 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:26.947064 | orchestrator | 2025-09-27 21:49:26.947074 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-27 21:49:26.947085 | orchestrator | Saturday 27 September 2025 21:49:18 +0000 (0:00:01.694) 0:07:33.651 **** 2025-09-27 21:49:26.947096 | orchestrator | changed: [testbed-manager] 2025-09-27 21:49:26.947107 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:26.947117 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:26.947128 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:26.947159 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:26.947170 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:26.947181 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:26.947191 | orchestrator | 2025-09-27 21:49:26.947202 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-27 
21:49:26.947213 | orchestrator | Saturday 27 September 2025 21:49:19 +0000 (0:00:01.272) 0:07:34.923 **** 2025-09-27 21:49:26.947223 | orchestrator | changed: [testbed-manager] 2025-09-27 21:49:26.947234 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:26.947244 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:26.947255 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:26.947265 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:26.947276 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:26.947286 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:26.947297 | orchestrator | 2025-09-27 21:49:26.947308 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-27 21:49:26.947319 | orchestrator | 2025-09-27 21:49:26.947329 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-27 21:49:26.947340 | orchestrator | Saturday 27 September 2025 21:49:20 +0000 (0:00:01.355) 0:07:36.279 **** 2025-09-27 21:49:26.947357 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:49:26.947369 | orchestrator | 2025-09-27 21:49:26.947379 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-27 21:49:26.947390 | orchestrator | Saturday 27 September 2025 21:49:21 +0000 (0:00:00.897) 0:07:37.176 **** 2025-09-27 21:49:26.947401 | orchestrator | ok: [testbed-manager] 2025-09-27 21:49:26.947419 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:49:26.947429 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:49:26.947440 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:49:26.947451 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:26.947461 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:26.947471 | orchestrator | ok: [testbed-node-2] 2025-09-27 
21:49:26.947482 | orchestrator | 2025-09-27 21:49:26.947493 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-27 21:49:26.947504 | orchestrator | Saturday 27 September 2025 21:49:22 +0000 (0:00:00.818) 0:07:37.994 **** 2025-09-27 21:49:26.947514 | orchestrator | changed: [testbed-manager] 2025-09-27 21:49:26.947525 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:26.947536 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:26.947547 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:26.947557 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:26.947568 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:26.947578 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:26.947589 | orchestrator | 2025-09-27 21:49:26.947599 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-27 21:49:26.947610 | orchestrator | Saturday 27 September 2025 21:49:23 +0000 (0:00:01.380) 0:07:39.375 **** 2025-09-27 21:49:26.947621 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:49:26.947632 | orchestrator | 2025-09-27 21:49:26.947643 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-27 21:49:26.947653 | orchestrator | Saturday 27 September 2025 21:49:24 +0000 (0:00:00.849) 0:07:40.224 **** 2025-09-27 21:49:26.947664 | orchestrator | ok: [testbed-manager] 2025-09-27 21:49:26.947675 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:49:26.947685 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:49:26.947696 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:49:26.947707 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:26.947717 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:26.947728 | orchestrator | ok: [testbed-node-2] 2025-09-27 
21:49:26.947754 | orchestrator | 2025-09-27 21:49:26.947766 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-27 21:49:26.947777 | orchestrator | Saturday 27 September 2025 21:49:25 +0000 (0:00:00.816) 0:07:41.041 **** 2025-09-27 21:49:26.947788 | orchestrator | changed: [testbed-manager] 2025-09-27 21:49:26.947798 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:26.947809 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:26.947820 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:26.947830 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:26.947841 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:26.947852 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:26.947862 | orchestrator | 2025-09-27 21:49:26.947873 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:49:26.947884 | orchestrator | testbed-manager : ok=164  changed=38  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2025-09-27 21:49:26.947896 | orchestrator | testbed-node-0 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-27 21:49:26.947907 | orchestrator | testbed-node-1 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-27 21:49:26.947918 | orchestrator | testbed-node-2 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-27 21:49:26.947929 | orchestrator | testbed-node-3 : ok=171  changed=63  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2025-09-27 21:49:26.947940 | orchestrator | testbed-node-4 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-27 21:49:26.947957 | orchestrator | testbed-node-5 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-27 21:49:26.947968 | orchestrator | 2025-09-27 21:49:26.947979 | orchestrator | 2025-09-27 
21:49:26.947996 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:49:27.205170 | orchestrator | Saturday 27 September 2025 21:49:26 +0000 (0:00:01.268) 0:07:42.310 **** 2025-09-27 21:49:27.205269 | orchestrator | =============================================================================== 2025-09-27 21:49:27.205283 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.19s 2025-09-27 21:49:27.205295 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.93s 2025-09-27 21:49:27.205306 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 31.89s 2025-09-27 21:49:27.205317 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.38s 2025-09-27 21:49:27.205327 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.54s 2025-09-27 21:49:27.205339 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.42s 2025-09-27 21:49:27.205349 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.60s 2025-09-27 21:49:27.205360 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.79s 2025-09-27 21:49:27.205392 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.20s 2025-09-27 21:49:27.205403 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.15s 2025-09-27 21:49:27.205414 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.08s 2025-09-27 21:49:27.205425 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.83s 2025-09-27 21:49:27.205435 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 7.75s 2025-09-27 21:49:27.205446 | 
orchestrator | osism.services.docker : Add repository ---------------------------------- 7.38s 2025-09-27 21:49:27.205456 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.34s 2025-09-27 21:49:27.205467 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 6.92s 2025-09-27 21:49:27.205480 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.71s 2025-09-27 21:49:27.205498 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.70s 2025-09-27 21:49:27.205516 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.38s 2025-09-27 21:49:27.205534 | orchestrator | osism.services.docker : Update package cache ---------------------------- 5.36s 2025-09-27 21:49:27.403207 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-27 21:49:27.403304 | orchestrator | + osism apply network 2025-09-27 21:49:39.677519 | orchestrator | 2025-09-27 21:49:39 | INFO  | Task 2956885d-153a-4aa1-88d2-1b80544310c3 (network) was prepared for execution. 2025-09-27 21:49:39.677651 | orchestrator | 2025-09-27 21:49:39 | INFO  | It takes a moment until task 2956885d-153a-4aa1-88d2-1b80544310c3 (network) has been started and output is visible here. 
2025-09-27 21:50:07.360172 | orchestrator | 2025-09-27 21:50:07.360295 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-27 21:50:07.360313 | orchestrator | 2025-09-27 21:50:07.360325 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-27 21:50:07.360336 | orchestrator | Saturday 27 September 2025 21:49:43 +0000 (0:00:00.287) 0:00:00.287 **** 2025-09-27 21:50:07.360348 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:07.360361 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:07.360372 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:07.360383 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:07.360394 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:07.360444 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:07.360456 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:07.360467 | orchestrator | 2025-09-27 21:50:07.360479 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-27 21:50:07.360490 | orchestrator | Saturday 27 September 2025 21:49:44 +0000 (0:00:00.675) 0:00:00.963 **** 2025-09-27 21:50:07.360502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:50:07.360516 | orchestrator | 2025-09-27 21:50:07.360527 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-27 21:50:07.360538 | orchestrator | Saturday 27 September 2025 21:49:45 +0000 (0:00:01.188) 0:00:02.152 **** 2025-09-27 21:50:07.360549 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:07.360559 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:07.360570 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:07.360581 | 
orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:07.360591 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:07.360602 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:07.360613 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:07.360623 | orchestrator | 2025-09-27 21:50:07.360634 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-27 21:50:07.360645 | orchestrator | Saturday 27 September 2025 21:49:47 +0000 (0:00:02.041) 0:00:04.193 **** 2025-09-27 21:50:07.360656 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:07.360666 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:07.360677 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:07.360689 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:07.360728 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:07.360741 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:07.360753 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:07.360765 | orchestrator | 2025-09-27 21:50:07.360778 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-27 21:50:07.360790 | orchestrator | Saturday 27 September 2025 21:49:49 +0000 (0:00:01.836) 0:00:06.030 **** 2025-09-27 21:50:07.360803 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-27 21:50:07.360816 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-27 21:50:07.360828 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-27 21:50:07.360841 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-27 21:50:07.360853 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-27 21:50:07.360865 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-27 21:50:07.360879 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-27 21:50:07.360891 | orchestrator | 2025-09-27 21:50:07.360904 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2025-09-27 21:50:07.360917 | orchestrator | Saturday 27 September 2025 21:49:50 +0000 (0:00:00.985) 0:00:07.016 **** 2025-09-27 21:50:07.360930 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-27 21:50:07.360943 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:50:07.360955 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:50:07.360968 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-27 21:50:07.360980 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-27 21:50:07.360993 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-27 21:50:07.361005 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-27 21:50:07.361017 | orchestrator | 2025-09-27 21:50:07.361030 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-27 21:50:07.361056 | orchestrator | Saturday 27 September 2025 21:49:53 +0000 (0:00:03.240) 0:00:10.256 **** 2025-09-27 21:50:07.361067 | orchestrator | changed: [testbed-manager] 2025-09-27 21:50:07.361090 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:50:07.361101 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:50:07.361112 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:50:07.361137 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:50:07.361148 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:50:07.361159 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:50:07.361169 | orchestrator | 2025-09-27 21:50:07.361180 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-27 21:50:07.361191 | orchestrator | Saturday 27 September 2025 21:49:55 +0000 (0:00:01.588) 0:00:11.844 **** 2025-09-27 21:50:07.361202 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:50:07.361212 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:50:07.361223 | orchestrator | ok: [testbed-node-1 
-> localhost] 2025-09-27 21:50:07.361234 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-27 21:50:07.361245 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-27 21:50:07.361255 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-27 21:50:07.361266 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-27 21:50:07.361277 | orchestrator | 2025-09-27 21:50:07.361288 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-27 21:50:07.361298 | orchestrator | Saturday 27 September 2025 21:49:57 +0000 (0:00:01.669) 0:00:13.514 **** 2025-09-27 21:50:07.361309 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:07.361320 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:07.361330 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:07.361341 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:07.361352 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:07.361363 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:07.361373 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:07.361384 | orchestrator | 2025-09-27 21:50:07.361395 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-27 21:50:07.361423 | orchestrator | Saturday 27 September 2025 21:49:58 +0000 (0:00:01.001) 0:00:14.516 **** 2025-09-27 21:50:07.361435 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:50:07.361446 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:07.361456 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:07.361467 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:07.361478 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:07.361488 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:07.361499 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:07.361510 | orchestrator | 2025-09-27 21:50:07.361521 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2025-09-27 21:50:07.361532 | orchestrator | Saturday 27 September 2025 21:49:58 +0000 (0:00:00.638) 0:00:15.155 **** 2025-09-27 21:50:07.361543 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:07.361553 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:07.361564 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:07.361575 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:07.361585 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:07.361596 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:07.361607 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:07.361617 | orchestrator | 2025-09-27 21:50:07.361628 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-27 21:50:07.361639 | orchestrator | Saturday 27 September 2025 21:50:00 +0000 (0:00:01.875) 0:00:17.030 **** 2025-09-27 21:50:07.361650 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:07.361661 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:07.361671 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:07.361682 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:07.361693 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:07.361731 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:07.361744 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-27 21:50:07.361756 | orchestrator | 2025-09-27 21:50:07.361767 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-27 21:50:07.361778 | orchestrator | Saturday 27 September 2025 21:50:01 +0000 (0:00:00.851) 0:00:17.881 **** 2025-09-27 21:50:07.361796 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:07.361807 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:50:07.361818 | orchestrator | changed: [testbed-node-1] 2025-09-27 
21:50:07.361828 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:50:07.361839 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:50:07.361850 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:50:07.361860 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:50:07.361871 | orchestrator | 2025-09-27 21:50:07.361882 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-27 21:50:07.361892 | orchestrator | Saturday 27 September 2025 21:50:03 +0000 (0:00:01.657) 0:00:19.538 **** 2025-09-27 21:50:07.361904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:50:07.361917 | orchestrator | 2025-09-27 21:50:07.361928 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-27 21:50:07.361939 | orchestrator | Saturday 27 September 2025 21:50:04 +0000 (0:00:01.258) 0:00:20.797 **** 2025-09-27 21:50:07.361950 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:07.361961 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:07.361971 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:07.361982 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:07.361993 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:07.362003 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:07.362014 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:07.362079 | orchestrator | 2025-09-27 21:50:07.362090 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-27 21:50:07.362101 | orchestrator | Saturday 27 September 2025 21:50:05 +0000 (0:00:00.940) 0:00:21.738 **** 2025-09-27 21:50:07.362112 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:07.362123 | orchestrator | ok: [testbed-node-0] 2025-09-27 
21:50:07.362134 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:07.362144 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:07.362155 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:07.362166 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:07.362177 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:07.362187 | orchestrator | 2025-09-27 21:50:07.362198 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-27 21:50:07.362209 | orchestrator | Saturday 27 September 2025 21:50:06 +0000 (0:00:00.820) 0:00:22.559 **** 2025-09-27 21:50:07.362220 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:50:07.362231 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:50:07.362242 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:50:07.362253 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:50:07.362263 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:50:07.362274 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:50:07.362285 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:50:07.362296 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:50:07.362306 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:50:07.362317 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:50:07.362328 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:50:07.362339 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:50:07.362349 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:50:07.362360 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:50:07.362378 | orchestrator | 2025-09-27 21:50:07.362398 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-27 21:50:22.998656 | orchestrator | Saturday 27 September 2025 21:50:07 +0000 (0:00:01.148) 0:00:23.707 **** 2025-09-27 21:50:22.998787 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:50:22.998805 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:22.998817 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:22.998828 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:22.998839 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:22.998850 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:22.998860 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:22.998871 | orchestrator | 2025-09-27 21:50:22.998883 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-27 21:50:22.998895 | orchestrator | Saturday 27 September 2025 21:50:07 +0000 (0:00:00.614) 0:00:24.322 **** 2025-09-27 21:50:22.998907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5 2025-09-27 21:50:22.998921 | orchestrator | 2025-09-27 21:50:22.998932 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-27 21:50:22.998943 | orchestrator | Saturday 27 September 2025 21:50:12 +0000 (0:00:04.600) 0:00:28.922 **** 2025-09-27 21:50:22.998955 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.998970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.998981 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.998992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': 
{'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 
'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999207 | orchestrator | 2025-09-27 21:50:22.999220 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-27 21:50:22.999233 | orchestrator | Saturday 27 September 2025 21:50:17 +0000 (0:00:05.205) 0:00:34.128 **** 2025-09-27 21:50:22.999245 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999314 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:50:22.999376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:22.999436 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:29.929998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:50:29.930153 | orchestrator | 2025-09-27 21:50:29.930172 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-27 21:50:29.930185 | orchestrator | Saturday 27 September 2025 21:50:22 +0000 (0:00:05.206) 0:00:39.335 **** 2025-09-27 21:50:29.930197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:50:29.930209 | orchestrator | 2025-09-27 21:50:29.930220 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-27 21:50:29.930231 | orchestrator | Saturday 27 September 2025 21:50:24 +0000 (0:00:01.118) 0:00:40.453 **** 2025-09-27 21:50:29.930242 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:29.930254 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:29.930265 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:29.930275 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:29.930286 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:29.930296 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:29.930306 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:29.930317 | orchestrator | 2025-09-27 21:50:29.930328 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2025-09-27 21:50:29.930339 | orchestrator | Saturday 27 September 2025 21:50:26 +0000 (0:00:01.954) 0:00:42.408 **** 2025-09-27 21:50:29.930349 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:50:29.930361 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:50:29.930371 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:50:29.930382 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:50:29.930393 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:50:29.930403 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:50:29.930414 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:50:29.930445 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:50:29.930456 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:50:29.930467 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:50:29.930478 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:50:29.930488 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:50:29.930499 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:50:29.930509 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:29.930520 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:50:29.930530 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 
21:50:29.930553 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:50:29.930566 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:50:29.930578 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:29.930590 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:50:29.930602 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:50:29.930614 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:50:29.930626 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:50:29.930638 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:29.930650 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:50:29.930662 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:50:29.930694 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:50:29.930707 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:50:29.930718 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:29.930730 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:29.930743 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:50:29.930754 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:50:29.930766 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:50:29.930779 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:50:29.930791 | 
orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:29.930803 | orchestrator | 2025-09-27 21:50:29.930815 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-27 21:50:29.930844 | orchestrator | Saturday 27 September 2025 21:50:28 +0000 (0:00:02.060) 0:00:44.468 **** 2025-09-27 21:50:29.930857 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:50:29.930869 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:29.930881 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:29.930893 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:29.930904 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:29.930915 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:29.930925 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:29.930936 | orchestrator | 2025-09-27 21:50:29.930946 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-27 21:50:29.930957 | orchestrator | Saturday 27 September 2025 21:50:28 +0000 (0:00:00.644) 0:00:45.113 **** 2025-09-27 21:50:29.930968 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:50:29.930978 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:29.930999 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:29.931009 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:29.931020 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:29.931030 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:29.931041 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:29.931051 | orchestrator | 2025-09-27 21:50:29.931062 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:50:29.931073 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:50:29.931086 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:50:29.931097 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:50:29.931108 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:50:29.931118 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:50:29.931129 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:50:29.931140 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:50:29.931150 | orchestrator | 2025-09-27 21:50:29.931162 | orchestrator | 2025-09-27 21:50:29.931173 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:50:29.931184 | orchestrator | Saturday 27 September 2025 21:50:29 +0000 (0:00:00.728) 0:00:45.841 **** 2025-09-27 21:50:29.931195 | orchestrator | =============================================================================== 2025-09-27 21:50:29.931205 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.21s 2025-09-27 21:50:29.931216 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.21s 2025-09-27 21:50:29.931226 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.60s 2025-09-27 21:50:29.931237 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.24s 2025-09-27 21:50:29.931253 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.06s 2025-09-27 21:50:29.931264 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.04s 2025-09-27 21:50:29.931274 | orchestrator | osism.commons.network : List existing 
configuration files --------------- 1.95s 2025-09-27 21:50:29.931285 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.88s 2025-09-27 21:50:29.931295 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.84s 2025-09-27 21:50:29.931306 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.67s 2025-09-27 21:50:29.931317 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.66s 2025-09-27 21:50:29.931327 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s 2025-09-27 21:50:29.931338 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.26s 2025-09-27 21:50:29.931348 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.19s 2025-09-27 21:50:29.931359 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.15s 2025-09-27 21:50:29.931369 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.12s 2025-09-27 21:50:29.931380 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.00s 2025-09-27 21:50:29.931390 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s 2025-09-27 21:50:29.931407 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.94s 2025-09-27 21:50:29.931418 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.85s 2025-09-27 21:50:30.217044 | orchestrator | + osism apply wireguard 2025-09-27 21:50:42.255722 | orchestrator | 2025-09-27 21:50:42 | INFO  | Task 593cda4b-d4b6-4018-ba2e-d91e2ecd39b6 (wireguard) was prepared for execution. 
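The "Create systemd networkd netdev files" / "Create systemd networkd network files" tasks above render one `.netdev`/`.network` pair per VXLAN (the later cleanup task confirms the file names `30-vxlan0.netdev`, `30-vxlan0.network`, etc.). Using the testbed-manager values shown in the log for `vxlan0` (VNI 42, local IP 192.168.16.5, MTU 1350, address 192.168.112.5/20), the generated files would look roughly like the sketch below. This is illustrative only — the actual templates live in the osism.commons network role, and how the role wires the unicast `dests` list into forwarding entries is an assumption here:

```ini
# /etc/systemd/network/30-vxlan0.netdev  (sketch, not the role's literal template)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
# Unicast VXLAN (no multicast group): each address in 'dests' typically needs
# an FDB entry, e.g. `bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst <ip>`.

# /etc/systemd/network/30-vxlan0.network
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

Compute nodes differ only in `Local=`, the `dests` set, and an empty `addresses` list for `vxlan0` (they carry addresses on `vxlan1`, VNI 23, instead).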
2025-09-27 21:50:42.255819 | orchestrator | 2025-09-27 21:50:42 | INFO  | It takes a moment until task 593cda4b-d4b6-4018-ba2e-d91e2ecd39b6 (wireguard) has been started and output is visible here. 2025-09-27 21:51:01.476010 | orchestrator | 2025-09-27 21:51:01.476831 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-27 21:51:01.476865 | orchestrator | 2025-09-27 21:51:01.476879 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-27 21:51:01.476893 | orchestrator | Saturday 27 September 2025 21:50:45 +0000 (0:00:00.216) 0:00:00.216 **** 2025-09-27 21:51:01.476907 | orchestrator | ok: [testbed-manager] 2025-09-27 21:51:01.476920 | orchestrator | 2025-09-27 21:51:01.476931 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-27 21:51:01.476943 | orchestrator | Saturday 27 September 2025 21:50:47 +0000 (0:00:01.468) 0:00:01.684 **** 2025-09-27 21:51:01.476953 | orchestrator | changed: [testbed-manager] 2025-09-27 21:51:01.476965 | orchestrator | 2025-09-27 21:51:01.476976 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-27 21:51:01.476987 | orchestrator | Saturday 27 September 2025 21:50:53 +0000 (0:00:06.517) 0:00:08.202 **** 2025-09-27 21:51:01.476998 | orchestrator | changed: [testbed-manager] 2025-09-27 21:51:01.477008 | orchestrator | 2025-09-27 21:51:01.477019 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-27 21:51:01.477030 | orchestrator | Saturday 27 September 2025 21:50:54 +0000 (0:00:00.544) 0:00:08.746 **** 2025-09-27 21:51:01.477041 | orchestrator | changed: [testbed-manager] 2025-09-27 21:51:01.477052 | orchestrator | 2025-09-27 21:51:01.477062 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-27 21:51:01.477073 | orchestrator 
| Saturday 27 September 2025 21:50:54 +0000 (0:00:00.427) 0:00:09.173 **** 2025-09-27 21:51:01.477084 | orchestrator | ok: [testbed-manager] 2025-09-27 21:51:01.477095 | orchestrator | 2025-09-27 21:51:01.477105 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-27 21:51:01.477116 | orchestrator | Saturday 27 September 2025 21:50:55 +0000 (0:00:00.532) 0:00:09.706 **** 2025-09-27 21:51:01.477127 | orchestrator | ok: [testbed-manager] 2025-09-27 21:51:01.477138 | orchestrator | 2025-09-27 21:51:01.477149 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-27 21:51:01.477160 | orchestrator | Saturday 27 September 2025 21:50:55 +0000 (0:00:00.527) 0:00:10.234 **** 2025-09-27 21:51:01.477171 | orchestrator | ok: [testbed-manager] 2025-09-27 21:51:01.477181 | orchestrator | 2025-09-27 21:51:01.477192 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-27 21:51:01.477203 | orchestrator | Saturday 27 September 2025 21:50:56 +0000 (0:00:00.408) 0:00:10.642 **** 2025-09-27 21:51:01.477213 | orchestrator | changed: [testbed-manager] 2025-09-27 21:51:01.477224 | orchestrator | 2025-09-27 21:51:01.477235 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-27 21:51:01.477246 | orchestrator | Saturday 27 September 2025 21:50:57 +0000 (0:00:01.191) 0:00:11.833 **** 2025-09-27 21:51:01.477257 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:51:01.477268 | orchestrator | changed: [testbed-manager] 2025-09-27 21:51:01.477278 | orchestrator | 2025-09-27 21:51:01.477289 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-27 21:51:01.477300 | orchestrator | Saturday 27 September 2025 21:50:58 +0000 (0:00:00.916) 0:00:12.750 **** 2025-09-27 21:51:01.477311 | orchestrator | changed: 
[testbed-manager] 2025-09-27 21:51:01.477350 | orchestrator | 2025-09-27 21:51:01.477361 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-27 21:51:01.477372 | orchestrator | Saturday 27 September 2025 21:51:00 +0000 (0:00:01.706) 0:00:14.456 **** 2025-09-27 21:51:01.477382 | orchestrator | changed: [testbed-manager] 2025-09-27 21:51:01.477393 | orchestrator | 2025-09-27 21:51:01.477403 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:51:01.477428 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:51:01.477441 | orchestrator | 2025-09-27 21:51:01.477451 | orchestrator | 2025-09-27 21:51:01.477462 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:51:01.477473 | orchestrator | Saturday 27 September 2025 21:51:01 +0000 (0:00:00.956) 0:00:15.413 **** 2025-09-27 21:51:01.477483 | orchestrator | =============================================================================== 2025-09-27 21:51:01.477494 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.52s 2025-09-27 21:51:01.477505 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.71s 2025-09-27 21:51:01.477516 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.47s 2025-09-27 21:51:01.477527 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s 2025-09-27 21:51:01.477537 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-09-27 21:51:01.477548 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s 2025-09-27 21:51:01.477558 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 
2025-09-27 21:51:01.477569 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-09-27 21:51:01.477579 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-09-27 21:51:01.477590 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-09-27 21:51:01.477601 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-09-27 21:51:01.769106 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-27 21:51:01.804362 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-27 21:51:01.804424 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-27 21:51:01.877038 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 205 0 --:--:-- --:--:-- --:--:-- 208 2025-09-27 21:51:01.890478 | orchestrator | + osism apply --environment custom workarounds 2025-09-27 21:51:03.815782 | orchestrator | 2025-09-27 21:51:03 | INFO  | Trying to run play workarounds in environment custom 2025-09-27 21:51:14.036711 | orchestrator | 2025-09-27 21:51:14 | INFO  | Task 707fec6b-f8b0-4c06-8cce-e8d9b2bad395 (workarounds) was prepared for execution. 2025-09-27 21:51:14.036825 | orchestrator | 2025-09-27 21:51:14 | INFO  | It takes a moment until task 707fec6b-f8b0-4c06-8cce-e8d9b2bad395 (workarounds) has been started and output is visible here. 
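The `wg0.conf` copied by the "Copy wg0.conf configuration file" task follows the standard wg-quick INI format; a minimal sketch is shown below. The field names are standard WireGuard syntax, but the concrete addresses, port, and keys are placeholders — none of these values appear in this job's log:

```ini
# /etc/wireguard/wg0.conf — wg-quick format (placeholder values, not from this job)
[Interface]
PrivateKey = <server private key>   ; generated by "Create public and private key - server"
Address = 10.8.0.1/24               ; placeholder; the testbed tunnel subnet is not shown
ListenPort = 51820                  ; assumption, common default port

[Peer]
PublicKey = <client public key>
PresharedKey = <preshared key>      ; generated by "Create preshared key"
AllowedIPs = 10.8.0.2/32            ; placeholder client address
```

The "Manage wg-quick@wg0.service service" task then enables the templated systemd unit, and the "Restart wg0 service" handler brings the tunnel up with the new configuration; the client-side files produced by "Copy client configuration files" are what `prepare-wireguard-configuration.sh` subsequently fetches.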
2025-09-27 21:51:37.825377 | orchestrator |
2025-09-27 21:51:37.825496 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 21:51:37.825512 | orchestrator |
2025-09-27 21:51:37.825524 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-09-27 21:51:37.825536 | orchestrator | Saturday 27 September 2025 21:51:17 +0000 (0:00:00.136) 0:00:00.136 ****
2025-09-27 21:51:37.825547 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-09-27 21:51:37.825559 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-09-27 21:51:37.825603 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-09-27 21:51:37.825615 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-09-27 21:51:37.825645 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-09-27 21:51:37.825656 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-09-27 21:51:37.825667 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-09-27 21:51:37.825677 | orchestrator |
2025-09-27 21:51:37.825688 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-09-27 21:51:37.825699 | orchestrator |
2025-09-27 21:51:37.825709 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-27 21:51:37.825720 | orchestrator | Saturday 27 September 2025 21:51:18 +0000 (0:00:00.696) 0:00:00.832 ****
2025-09-27 21:51:37.825731 | orchestrator | ok: [testbed-manager]
2025-09-27 21:51:37.825743 | orchestrator |
2025-09-27 21:51:37.825754 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-09-27 21:51:37.825766 | orchestrator |
2025-09-27 21:51:37.825784 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-27 21:51:37.825802 | orchestrator | Saturday 27 September 2025 21:51:20 +0000 (0:00:02.398) 0:00:03.231 ****
2025-09-27 21:51:37.825821 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:51:37.825839 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:51:37.825857 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:51:37.825874 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:51:37.825891 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:51:37.825908 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:51:37.825926 | orchestrator |
2025-09-27 21:51:37.825943 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-09-27 21:51:37.825959 | orchestrator |
2025-09-27 21:51:37.825977 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-09-27 21:51:37.825994 | orchestrator | Saturday 27 September 2025 21:51:22 +0000 (0:00:01.716) 0:00:04.947 ****
2025-09-27 21:51:37.826014 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-27 21:51:37.826108 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-27 21:51:37.826125 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-27 21:51:37.826153 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-27 21:51:37.826172 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-27 21:51:37.826190 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-27 21:51:37.826208 | orchestrator |
2025-09-27 21:51:37.826228 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-09-27 21:51:37.826247 | orchestrator | Saturday 27 September 2025 21:51:23 +0000 (0:00:01.348) 0:00:06.296 ****
2025-09-27 21:51:37.826267 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:51:37.826288 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:51:37.826308 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:51:37.826328 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:51:37.826347 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:51:37.826367 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:51:37.826386 | orchestrator |
2025-09-27 21:51:37.826407 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-09-27 21:51:37.826426 | orchestrator | Saturday 27 September 2025 21:51:27 +0000 (0:00:03.692) 0:00:09.988 ****
2025-09-27 21:51:37.826446 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:51:37.826466 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:51:37.826485 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:51:37.826503 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:51:37.826521 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:51:37.826541 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:51:37.826618 | orchestrator |
2025-09-27 21:51:37.826641 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-09-27 21:51:37.826661 | orchestrator |
2025-09-27 21:51:37.826682 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-09-27 21:51:37.826701 | orchestrator | Saturday 27 September 2025 21:51:28 +0000 (0:00:00.607) 0:00:10.596 ****
2025-09-27 21:51:37.826722 | orchestrator | changed: [testbed-manager]
2025-09-27 21:51:37.826742 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:51:37.826762 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:51:37.826782 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:51:37.826803 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:51:37.826823 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:51:37.826843 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:51:37.826862 | orchestrator |
2025-09-27 21:51:37.826880 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-09-27 21:51:37.826897 | orchestrator | Saturday 27 September 2025 21:51:29 +0000 (0:00:01.494) 0:00:12.091 ****
2025-09-27 21:51:37.826916 | orchestrator | changed: [testbed-manager]
2025-09-27 21:51:37.826935 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:51:37.826955 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:51:37.826973 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:51:37.826991 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:51:37.827009 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:51:37.827055 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:51:37.827074 | orchestrator |
2025-09-27 21:51:37.827093 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-09-27 21:51:37.827113 | orchestrator | Saturday 27 September 2025 21:51:31 +0000 (0:00:01.456) 0:00:13.790 ****
2025-09-27 21:51:37.827133 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:51:37.827153 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:51:37.827174 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:51:37.827194 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:51:37.827214 | orchestrator | ok: [testbed-manager]
2025-09-27 21:51:37.827234 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:51:37.827254 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:51:37.827274 | orchestrator |
2025-09-27 21:51:37.827294 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-09-27 21:51:37.827314 | orchestrator | Saturday 27 September 2025 21:51:32 +0000 (0:00:01.456) 0:00:15.247 ****
2025-09-27 21:51:37.827335 | orchestrator | changed: [testbed-manager]
2025-09-27 21:51:37.827355 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:51:37.827375 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:51:37.827395 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:51:37.827413 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:51:37.827435 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:51:37.827455 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:51:37.827475 | orchestrator |
2025-09-27 21:51:37.827495 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-27 21:51:37.827515 | orchestrator | Saturday 27 September 2025 21:51:34 +0000 (0:00:01.761) 0:00:17.008 ****
2025-09-27 21:51:37.827535 | orchestrator | skipping: [testbed-manager]
2025-09-27 21:51:37.827556 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:51:37.827643 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:51:37.827666 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:51:37.827686 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:51:37.827706 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:51:37.827726 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:51:37.827746 | orchestrator |
2025-09-27 21:51:37.827764 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-27 21:51:37.827780 | orchestrator |
2025-09-27 21:51:37.827796 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-27 21:51:37.827812 | orchestrator | Saturday 27 September 2025 21:51:35 +0000 (0:00:00.632) 0:00:17.641 ****
2025-09-27 21:51:37.827827 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:51:37.827855 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:51:37.827873 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:51:37.827896 | orchestrator | ok: [testbed-manager]
2025-09-27 21:51:37.827914 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:51:37.827929 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:51:37.827946 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:51:37.827962 | orchestrator |
2025-09-27 21:51:37.827980 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:51:37.828000 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-27 21:51:37.828019 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:51:37.828037 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:51:37.828054 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:51:37.828072 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:51:37.828089 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:51:37.828107 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:51:37.828123 | orchestrator |
2025-09-27 21:51:37.828139 | orchestrator |
2025-09-27 21:51:37.828156 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:51:37.828174 | orchestrator | Saturday 27 September 2025 21:51:37 +0000 (0:00:02.643) 0:00:20.284 ****
2025-09-27 21:51:37.828192 | orchestrator | ===============================================================================
2025-09-27 21:51:37.828207 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.69s
2025-09-27 21:51:37.828223 | orchestrator | Install python3-docker -------------------------------------------------- 2.64s
2025-09-27 21:51:37.828237 | orchestrator | Apply netplan configuration --------------------------------------------- 2.40s
2025-09-27 21:51:37.828254 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s
2025-09-27 21:51:37.828270 | orchestrator | Apply netplan configuration --------------------------------------------- 1.72s
2025-09-27 21:51:37.828287 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.70s
2025-09-27 21:51:37.828305 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.49s
2025-09-27 21:51:37.828322 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.46s
2025-09-27 21:51:37.828338 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.35s
2025-09-27 21:51:37.828354 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.70s
2025-09-27 21:51:37.828386 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2025-09-27 21:51:37.828422 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.61s
2025-09-27 21:51:38.487687 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-27 21:51:50.531992 | orchestrator | 2025-09-27 21:51:50 | INFO  | Task 042d64e8-c237-459e-a037-a09b03f39878 (reboot) was prepared for execution.
2025-09-27 21:51:50.532107 | orchestrator | 2025-09-27 21:51:50 | INFO  | It takes a moment until task 042d64e8-c237-459e-a037-a09b03f39878 (reboot) has been started and output is visible here.
2025-09-27 21:52:00.508733 | orchestrator |
2025-09-27 21:52:00.508842 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-27 21:52:00.508886 | orchestrator |
2025-09-27 21:52:00.508898 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-27 21:52:00.508910 | orchestrator | Saturday 27 September 2025 21:51:54 +0000 (0:00:00.214) 0:00:00.214 ****
2025-09-27 21:52:00.508921 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:52:00.508933 | orchestrator |
2025-09-27 21:52:00.508944 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-27 21:52:00.508955 | orchestrator | Saturday 27 September 2025 21:51:54 +0000 (0:00:00.107) 0:00:00.322 ****
2025-09-27 21:52:00.508967 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:52:00.508977 | orchestrator |
2025-09-27 21:52:00.508988 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-27 21:52:00.509000 | orchestrator | Saturday 27 September 2025 21:51:55 +0000 (0:00:00.960) 0:00:01.283 ****
2025-09-27 21:52:00.509010 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:52:00.509021 | orchestrator |
2025-09-27 21:52:00.509032 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-27 21:52:00.509043 | orchestrator |
2025-09-27 21:52:00.509053 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-27 21:52:00.509064 | orchestrator | Saturday 27 September 2025 21:51:55 +0000 (0:00:00.104) 0:00:01.388 ****
2025-09-27 21:52:00.509075 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:52:00.509085 | orchestrator |
2025-09-27 21:52:00.509096 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-27 21:52:00.509107 | orchestrator | Saturday 27 September 2025 21:51:55 +0000 (0:00:00.097) 0:00:01.485 ****
2025-09-27 21:52:00.509118 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:52:00.509128 | orchestrator |
2025-09-27 21:52:00.509139 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-27 21:52:00.509150 | orchestrator | Saturday 27 September 2025 21:51:56 +0000 (0:00:00.647) 0:00:02.133 ****
2025-09-27 21:52:00.509160 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:52:00.509171 | orchestrator |
2025-09-27 21:52:00.509182 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-27 21:52:00.509192 | orchestrator |
2025-09-27 21:52:00.509203 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-27 21:52:00.509214 | orchestrator | Saturday 27 September 2025 21:51:56 +0000 (0:00:00.130) 0:00:02.263 ****
2025-09-27 21:52:00.509224 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:52:00.509235 | orchestrator |
2025-09-27 21:52:00.509260 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-27 21:52:00.509271 | orchestrator | Saturday 27 September 2025 21:51:56 +0000 (0:00:00.213) 0:00:02.477 ****
2025-09-27 21:52:00.509282 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:52:00.509293 | orchestrator |
2025-09-27 21:52:00.509304 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-27 21:52:00.509314 | orchestrator | Saturday 27 September 2025 21:51:57 +0000 (0:00:00.645) 0:00:03.123 ****
2025-09-27 21:52:00.509325 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:52:00.509336 | orchestrator |
2025-09-27 21:52:00.509347 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-27 21:52:00.509357 | orchestrator |
2025-09-27 21:52:00.509368 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-27 21:52:00.509379 | orchestrator | Saturday 27 September 2025 21:51:57 +0000 (0:00:00.119) 0:00:03.242 ****
2025-09-27 21:52:00.509390 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:52:00.509400 | orchestrator |
2025-09-27 21:52:00.509411 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-27 21:52:00.509422 | orchestrator | Saturday 27 September 2025 21:51:57 +0000 (0:00:00.097) 0:00:03.339 ****
2025-09-27 21:52:00.509433 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:52:00.509444 | orchestrator |
2025-09-27 21:52:00.509455 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-27 21:52:00.509475 | orchestrator | Saturday 27 September 2025 21:51:58 +0000 (0:00:00.642) 0:00:03.982 ****
2025-09-27 21:52:00.509486 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:52:00.509497 | orchestrator |
2025-09-27 21:52:00.509507 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-27 21:52:00.509518 | orchestrator |
2025-09-27 21:52:00.509529 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-27 21:52:00.509563 | orchestrator | Saturday 27 September 2025 21:51:58 +0000 (0:00:00.126) 0:00:04.108 ****
2025-09-27 21:52:00.509575 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:52:00.509586 | orchestrator |
2025-09-27 21:52:00.509597 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-27 21:52:00.509607 | orchestrator | Saturday 27 September 2025 21:51:58 +0000 (0:00:00.109) 0:00:04.218 ****
2025-09-27 21:52:00.509618 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:52:00.509629 | orchestrator |
2025-09-27 21:52:00.509639 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-27 21:52:00.509650 | orchestrator | Saturday 27 September 2025 21:51:59 +0000 (0:00:00.668) 0:00:04.886 ****
2025-09-27 21:52:00.509661 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:52:00.509671 | orchestrator |
2025-09-27 21:52:00.509682 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-27 21:52:00.509693 | orchestrator |
2025-09-27 21:52:00.509704 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-27 21:52:00.509714 | orchestrator | Saturday 27 September 2025 21:51:59 +0000 (0:00:00.115) 0:00:05.002 ****
2025-09-27 21:52:00.509725 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:52:00.509736 | orchestrator |
2025-09-27 21:52:00.509746 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-27 21:52:00.509757 | orchestrator | Saturday 27 September 2025 21:51:59 +0000 (0:00:00.088) 0:00:05.090 ****
2025-09-27 21:52:00.509768 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:52:00.509778 | orchestrator |
2025-09-27 21:52:00.509789 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-27 21:52:00.509800 | orchestrator | Saturday 27 September 2025 21:52:00 +0000 (0:00:00.715) 0:00:05.805 ****
2025-09-27 21:52:00.509827 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:52:00.509838 | orchestrator |
2025-09-27 21:52:00.509849 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:52:00.509861 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:52:00.509873 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:52:00.509884 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:52:00.509895 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:52:00.509906 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:52:00.509917 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 21:52:00.509927 | orchestrator |
2025-09-27 21:52:00.509938 | orchestrator |
2025-09-27 21:52:00.509949 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:52:00.509959 | orchestrator | Saturday 27 September 2025 21:52:00 +0000 (0:00:00.033) 0:00:05.838 ****
2025-09-27 21:52:00.509970 | orchestrator | ===============================================================================
2025-09-27 21:52:00.509981 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.28s
2025-09-27 21:52:00.509999 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.71s
2025-09-27 21:52:00.510010 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s
2025-09-27 21:52:00.781002 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-27 21:52:12.844809 | orchestrator | 2025-09-27 21:52:12 | INFO  | Task d7646349-fdf6-4a7c-9584-20ea4a507a2a (wait-for-connection) was prepared for execution.
2025-09-27 21:52:12.844902 | orchestrator | 2025-09-27 21:52:12 | INFO  | It takes a moment until task d7646349-fdf6-4a7c-9584-20ea4a507a2a (wait-for-connection) has been started and output is visible here.
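The two `osism apply` invocations above form a deliberate two-phase reboot: the reboot play fires the reboot without waiting (so the task does not fail when the SSH session to the rebooting node dies), and a separate `wait-for-connection` play then blocks until the nodes answer again. A minimal sketch of that sequence; `OSISM_BIN` is a hypothetical indirection added here so the flow can be exercised without a real manager, and is not part of the testbed scripts:

```shell
#!/usr/bin/env bash
# Sketch of the two-phase reboot seen in the trace above.
# OSISM_BIN is a hypothetical override for testing; defaults to osism.
: "${OSISM_BIN:=osism}"

reboot_and_wait() {
    local limit="$1"
    # Phase 1: trigger the reboot, do not wait for it to complete.
    "$OSISM_BIN" apply reboot -l "$limit" -e ireallymeanit=yes || return 1
    # Phase 2: separate play that blocks until the hosts are reachable.
    "$OSISM_BIN" apply wait-for-connection -l "$limit" -e ireallymeanit=yes
}
```

Splitting the wait into its own play keeps the reboot task itself short-lived; the connection check can then retry for as long as the nodes need to come back.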
2025-09-27 21:52:28.501413 | orchestrator |
2025-09-27 21:52:28.501563 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-09-27 21:52:28.501581 | orchestrator |
2025-09-27 21:52:28.501593 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-09-27 21:52:28.501605 | orchestrator | Saturday 27 September 2025 21:52:16 +0000 (0:00:00.235) 0:00:00.235 ****
2025-09-27 21:52:28.501616 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:52:28.501628 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:52:28.501639 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:52:28.501650 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:52:28.501661 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:52:28.501671 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:52:28.501682 | orchestrator |
2025-09-27 21:52:28.501693 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:52:28.501704 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:52:28.501717 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:52:28.501728 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:52:28.501739 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:52:28.501750 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:52:28.501761 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:52:28.501771 | orchestrator |
2025-09-27 21:52:28.501782 | orchestrator |
2025-09-27 21:52:28.501793 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:52:28.501804 | orchestrator | Saturday 27 September 2025 21:52:28 +0000 (0:00:11.574) 0:00:11.810 ****
2025-09-27 21:52:28.501815 | orchestrator | ===============================================================================
2025-09-27 21:52:28.501825 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s
2025-09-27 21:52:28.690129 | orchestrator | + osism apply hddtemp
2025-09-27 21:52:40.395259 | orchestrator | 2025-09-27 21:52:40 | INFO  | Task 398e654e-f0d7-4d45-88f0-b09fd1f38772 (hddtemp) was prepared for execution.
2025-09-27 21:52:40.395377 | orchestrator | 2025-09-27 21:52:40 | INFO  | It takes a moment until task 398e654e-f0d7-4d45-88f0-b09fd1f38772 (hddtemp) has been started and output is visible here.
2025-09-27 21:53:07.049839 | orchestrator |
2025-09-27 21:53:07.049958 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-09-27 21:53:07.049976 | orchestrator |
2025-09-27 21:53:07.049988 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-09-27 21:53:07.049999 | orchestrator | Saturday 27 September 2025 21:52:44 +0000 (0:00:00.233) 0:00:00.233 ****
2025-09-27 21:53:07.050011 | orchestrator | ok: [testbed-manager]
2025-09-27 21:53:07.050131 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:53:07.050172 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:53:07.050184 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:53:07.050195 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:53:07.050205 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:53:07.050216 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:53:07.050227 | orchestrator |
2025-09-27 21:53:07.050238 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-09-27 21:53:07.050249 | orchestrator | Saturday 27 September 2025 21:52:44 +0000 (0:00:00.528) 0:00:00.762 ****
2025-09-27 21:53:07.050262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 21:53:07.050275 | orchestrator |
2025-09-27 21:53:07.050286 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-09-27 21:53:07.050297 | orchestrator | Saturday 27 September 2025 21:52:45 +0000 (0:00:00.898) 0:00:01.660 ****
2025-09-27 21:53:07.050308 | orchestrator | ok: [testbed-manager]
2025-09-27 21:53:07.050318 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:53:07.050329 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:53:07.050340 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:53:07.050351 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:53:07.050363 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:53:07.050375 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:53:07.050387 | orchestrator |
2025-09-27 21:53:07.050400 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-09-27 21:53:07.050413 | orchestrator | Saturday 27 September 2025 21:52:47 +0000 (0:00:01.915) 0:00:03.576 ****
2025-09-27 21:53:07.050425 | orchestrator | changed: [testbed-manager]
2025-09-27 21:53:07.050439 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:53:07.050451 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:53:07.050463 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:53:07.050479 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:53:07.050585 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:53:07.050606 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:53:07.050623 | orchestrator |
2025-09-27 21:53:07.050657 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-09-27 21:53:07.050695 | orchestrator | Saturday 27 September 2025 21:52:48 +0000 (0:00:01.012) 0:00:04.589 ****
2025-09-27 21:53:07.050716 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:53:07.050736 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:53:07.050753 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:53:07.050769 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:53:07.050780 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:53:07.050791 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:53:07.050801 | orchestrator | ok: [testbed-manager]
2025-09-27 21:53:07.050812 | orchestrator |
2025-09-27 21:53:07.050822 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-09-27 21:53:07.050833 | orchestrator | Saturday 27 September 2025 21:52:50 +0000 (0:00:01.945) 0:00:06.535 ****
2025-09-27 21:53:07.050844 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:53:07.050854 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:53:07.050865 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:53:07.050875 | orchestrator | changed: [testbed-manager]
2025-09-27 21:53:07.050886 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:53:07.050897 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:53:07.050907 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:53:07.050918 | orchestrator |
2025-09-27 21:53:07.050928 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-09-27 21:53:07.050939 | orchestrator | Saturday 27 September 2025 21:52:50 +0000 (0:00:00.688) 0:00:07.223 ****
2025-09-27 21:53:07.050949 | orchestrator | changed: [testbed-manager]
2025-09-27 21:53:07.050960 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:53:07.050971 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:53:07.050994 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:53:07.051005 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:53:07.051016 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:53:07.051026 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:53:07.051037 | orchestrator |
2025-09-27 21:53:07.051048 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-09-27 21:53:07.051059 | orchestrator | Saturday 27 September 2025 21:53:03 +0000 (0:00:12.729) 0:00:19.953 ****
2025-09-27 21:53:07.051070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 21:53:07.051081 | orchestrator |
2025-09-27 21:53:07.051092 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-09-27 21:53:07.051103 | orchestrator | Saturday 27 September 2025 21:53:04 +0000 (0:00:01.171) 0:00:21.125 ****
2025-09-27 21:53:07.051113 | orchestrator | changed: [testbed-manager]
2025-09-27 21:53:07.051124 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:53:07.051134 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:53:07.051145 | orchestrator | changed: [testbed-node-3]
2025-09-27 21:53:07.051155 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:53:07.051166 | orchestrator | changed: [testbed-node-4]
2025-09-27 21:53:07.051176 | orchestrator | changed: [testbed-node-5]
2025-09-27 21:53:07.051187 | orchestrator |
2025-09-27 21:53:07.051198 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:53:07.051209 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:53:07.051244 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-27 21:53:07.051256 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-27 21:53:07.051267 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-27 21:53:07.051278 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-27 21:53:07.051289 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-27 21:53:07.051299 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-27 21:53:07.051310 | orchestrator |
2025-09-27 21:53:07.051321 | orchestrator |
2025-09-27 21:53:07.051331 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:53:07.051342 | orchestrator | Saturday 27 September 2025 21:53:06 +0000 (0:00:01.757) 0:00:22.882 ****
2025-09-27 21:53:07.051353 | orchestrator | ===============================================================================
2025-09-27 21:53:07.051364 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.73s
2025-09-27 21:53:07.051374 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.95s
2025-09-27 21:53:07.051385 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.92s
2025-09-27 21:53:07.051396 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.76s
2025-09-27 21:53:07.051406 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s
2025-09-27 21:53:07.051417 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.01s
2025-09-27 21:53:07.051427 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.90s
2025-09-27 21:53:07.051446 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.69s
2025-09-27 21:53:07.051462 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.53s
2025-09-27 21:53:07.330922 | orchestrator | ++ semver latest 7.1.1
2025-09-27 21:53:07.379982 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-27 21:53:07.380054 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-27 21:53:07.380068 | orchestrator | + sudo systemctl restart manager.service
2025-09-27 21:53:20.915868 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-27 21:53:20.915989 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-27 21:53:20.916006 | orchestrator | + local max_attempts=60
2025-09-27 21:53:20.916019 | orchestrator | + local name=ceph-ansible
2025-09-27 21:53:20.916030 | orchestrator | + local attempt_num=1
2025-09-27 21:53:20.916042 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:53:20.961773 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-27 21:53:20.962094 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-27 21:53:20.962192 | orchestrator | + sleep 5
2025-09-27 21:53:25.966678 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:53:26.009774 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-27 21:53:26.009852 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-27 21:53:26.009865 | orchestrator | + sleep 5
2025-09-27 21:53:31.014318 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:53:31.053329 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-27 21:53:31.053415 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-27 21:53:31.053425 | orchestrator | + sleep 5
2025-09-27 21:53:36.058447 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:53:36.101775 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-27 21:53:36.101844 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-27 21:53:36.101857 | orchestrator | + sleep 5
2025-09-27 21:53:41.106512 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:53:41.151234 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-27 21:53:41.151313 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-27 21:53:41.151327 | orchestrator | + sleep 5
2025-09-27 21:53:46.156888 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:53:46.199077 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-27 21:53:46.199134 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-27 21:53:46.199448 | orchestrator | + sleep 5
2025-09-27 21:53:51.205068 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:53:51.242154 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-27 21:53:51.242216 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-27 21:53:51.242227 | orchestrator | + sleep 5
2025-09-27 21:53:56.247123 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:53:56.296389 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-27 21:53:56.296467 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-27 21:53:56.296481 | orchestrator | + sleep 5
2025-09-27 21:54:01.298766 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:54:01.321818 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-27 21:54:01.321906 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-27 21:54:01.321921 | orchestrator | + sleep 5
2025-09-27 21:54:06.324818 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-27 21:54:06.360004 | orchestrator | + [[ starting ==
\h\e\a\l\t\h\y ]] 2025-09-27 21:54:06.360075 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:54:06.360088 | orchestrator | + sleep 5 2025-09-27 21:54:11.363747 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:54:11.399086 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-27 21:54:11.399194 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:54:11.399212 | orchestrator | + sleep 5 2025-09-27 21:54:16.404387 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:54:16.438778 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-27 21:54:16.438866 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:54:16.438881 | orchestrator | + sleep 5 2025-09-27 21:54:21.443797 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:54:21.474786 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-27 21:54:21.474899 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:54:21.474924 | orchestrator | + sleep 5 2025-09-27 21:54:26.478090 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:54:26.517365 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:54:26.517600 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-27 21:54:26.517616 | orchestrator | + local max_attempts=60 2025-09-27 21:54:26.517668 | orchestrator | + local name=kolla-ansible 2025-09-27 21:54:26.517681 | orchestrator | + local attempt_num=1 2025-09-27 21:54:26.517705 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-27 21:54:26.554322 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:54:26.554401 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-27 21:54:26.554439 | orchestrator | + local max_attempts=60 2025-09-27 
21:54:26.554452 | orchestrator | + local name=osism-ansible 2025-09-27 21:54:26.554456 | orchestrator | + local attempt_num=1 2025-09-27 21:54:26.555506 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-27 21:54:26.588866 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:54:26.588907 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-27 21:54:26.588913 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-27 21:54:26.764899 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-27 21:54:26.933073 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-27 21:54:27.091910 | orchestrator | ARA in osism-ansible already disabled. 2025-09-27 21:54:27.260597 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-27 21:54:27.261575 | orchestrator | + osism apply gather-facts 2025-09-27 21:54:39.372700 | orchestrator | 2025-09-27 21:54:39 | INFO  | Task ffda5cf1-8920-430b-b71f-eda762c3c6a3 (gather-facts) was prepared for execution. 2025-09-27 21:54:39.372821 | orchestrator | 2025-09-27 21:54:39 | INFO  | It takes a moment until task ffda5cf1-8920-430b-b71f-eda762c3c6a3 (gather-facts) has been started and output is visible here. 
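The `wait_for_container_healthy` calls traced above poll `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds until the container reports `healthy` or the attempt budget runs out. A minimal sketch of that pattern, with the probe passed in as a command so it runs without Docker (the generic `wait_for_healthy`, `probe`, and the 1-second sleep are illustrative stand-ins, not the testbed script itself):

```shell
#!/usr/bin/env bash
# Retry loop in the style of wait_for_container_healthy from the trace above.
wait_for_healthy() {
    local max_attempts=$1; shift
    local attempt_num=1
    # Probe until the command prints "healthy"; the real script runs
    # `docker inspect -f '{{.State.Health.Status}}' <container>` here.
    until [[ "$("$@")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "gave up after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 1   # the testbed script sleeps 5s between probes
    done
    echo "healthy"
}

# Mock probe: reports "starting" for the first two calls, then "healthy",
# mimicking the unhealthy -> starting -> healthy progression in the log.
COUNTER=$(mktemp)
probe() {
    local n=$(( $(cat "$COUNTER") + 1 ))
    echo "$n" > "$COUNTER"
    if (( n < 3 )); then echo starting; else echo healthy; fi
}

echo 0 > "$COUNTER"
wait_for_healthy 60 probe
rm -f "$COUNTER"
```

Keeping the attempt counter and name in `local` variables, as the traced function does, lets nested calls (ceph-ansible, then kolla-ansible, then osism-ansible) reuse the same helper without state leaking between them.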
2025-09-27 21:54:52.033424 | orchestrator | 2025-09-27 21:54:52.033560 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-27 21:54:52.033581 | orchestrator | 2025-09-27 21:54:52.033595 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-27 21:54:52.033609 | orchestrator | Saturday 27 September 2025 21:54:42 +0000 (0:00:00.202) 0:00:00.202 **** 2025-09-27 21:54:52.033622 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:54:52.033637 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:54:52.033650 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:54:52.033684 | orchestrator | ok: [testbed-manager] 2025-09-27 21:54:52.033700 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:54:52.033714 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:54:52.033726 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:54:52.033739 | orchestrator | 2025-09-27 21:54:52.033752 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-27 21:54:52.033766 | orchestrator | 2025-09-27 21:54:52.033801 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-27 21:54:52.033815 | orchestrator | Saturday 27 September 2025 21:54:51 +0000 (0:00:08.120) 0:00:08.322 **** 2025-09-27 21:54:52.033830 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:54:52.033846 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:54:52.033860 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:54:52.033874 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:54:52.033888 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:54:52.033900 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:54:52.033912 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:54:52.033920 | orchestrator | 2025-09-27 21:54:52.033928 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-27 21:54:52.033937 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:54:52.033948 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:54:52.033980 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:54:52.033990 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:54:52.033999 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:54:52.034009 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:54:52.034060 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:54:52.034068 | orchestrator | 2025-09-27 21:54:52.034076 | orchestrator | 2025-09-27 21:54:52.034084 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:54:52.034095 | orchestrator | Saturday 27 September 2025 21:54:51 +0000 (0:00:00.525) 0:00:08.848 **** 2025-09-27 21:54:52.034108 | orchestrator | =============================================================================== 2025-09-27 21:54:52.034121 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.12s 2025-09-27 21:54:52.034134 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-09-27 21:54:52.377453 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-27 21:54:52.394577 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-27 21:54:52.406567 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-27 21:54:52.416337 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-27 21:54:52.426803 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-27 21:54:52.438829 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-27 21:54:52.450061 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-27 21:54:52.460492 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-27 21:54:52.471138 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-27 21:54:52.480796 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-27 21:54:52.495506 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-27 21:54:52.517631 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-27 21:54:52.535923 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-27 21:54:52.554334 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-27 21:54:52.571067 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-27 21:54:52.583753 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-27 21:54:52.594254 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-27 21:54:52.609025 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-27 21:54:52.625992 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-27 21:54:52.639783 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-27 21:54:52.656755 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-27 21:54:53.072660 | orchestrator | ok: Runtime: 0:22:29.362997 2025-09-27 21:54:53.168542 | 2025-09-27 21:54:53.168692 | TASK [Deploy services] 2025-09-27 21:54:53.700590 | orchestrator | skipping: Conditional result was False 2025-09-27 21:54:53.718603 | 2025-09-27 21:54:53.718744 | TASK [Deploy in a nutshell] 2025-09-27 21:54:54.399458 | orchestrator | + set -e 2025-09-27 21:54:54.399783 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-27 21:54:54.399812 | orchestrator | ++ export INTERACTIVE=false 2025-09-27 21:54:54.399834 | orchestrator | ++ INTERACTIVE=false 2025-09-27 21:54:54.399848 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-27 21:54:54.399861 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-27 21:54:54.399889 | orchestrator | + source /opt/manager-vars.sh 2025-09-27 21:54:54.399937 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-27 21:54:54.399965 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-27 21:54:54.399979 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-27 21:54:54.399995 | orchestrator | ++ CEPH_VERSION=reef 2025-09-27 21:54:54.400007 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-27 21:54:54.400025 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2025-09-27 21:54:54.400036 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-27 21:54:54.400057 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-27 21:54:54.400067 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-27 21:54:54.400082 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-27 21:54:54.400092 | orchestrator | ++ export ARA=false 2025-09-27 21:54:54.400104 | orchestrator | ++ ARA=false 2025-09-27 21:54:54.400114 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-27 21:54:54.400130 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-27 21:54:54.400141 | orchestrator | ++ export TEMPEST=false 2025-09-27 21:54:54.400151 | orchestrator | ++ TEMPEST=false 2025-09-27 21:54:54.400162 | orchestrator | ++ export IS_ZUUL=true 2025-09-27 21:54:54.400173 | orchestrator | ++ IS_ZUUL=true 2025-09-27 21:54:54.400183 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-27 21:54:54.400195 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-27 21:54:54.400206 | orchestrator | ++ export EXTERNAL_API=false 2025-09-27 21:54:54.400221 | orchestrator | ++ EXTERNAL_API=false 2025-09-27 21:54:54.400239 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-27 21:54:54.400258 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-27 21:54:54.400276 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-27 21:54:54.400295 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-27 21:54:54.400313 | orchestrator | 2025-09-27 21:54:54.400333 | orchestrator | # PULL IMAGES 2025-09-27 21:54:54.400352 | orchestrator | 2025-09-27 21:54:54.400370 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-27 21:54:54.400398 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-27 21:54:54.400417 | orchestrator | + echo 2025-09-27 21:54:54.400430 | orchestrator | + echo '# PULL IMAGES' 2025-09-27 21:54:54.400440 | orchestrator | + echo 2025-09-27 21:54:54.401175 | orchestrator | ++ semver latest 7.0.0 2025-09-27 
21:54:54.455571 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-27 21:54:54.455689 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-27 21:54:54.455716 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-27 21:54:56.087621 | orchestrator | 2025-09-27 21:54:56 | INFO  | Trying to run play pull-images in environment custom 2025-09-27 21:55:06.181315 | orchestrator | 2025-09-27 21:55:06 | INFO  | Task f4ba9ec6-5c60-4007-b413-ab292a60fb03 (pull-images) was prepared for execution. 2025-09-27 21:55:06.181441 | orchestrator | 2025-09-27 21:55:06 | INFO  | Task f4ba9ec6-5c60-4007-b413-ab292a60fb03 is running in background. No more output. Check ARA for logs. 2025-09-27 21:55:08.146772 | orchestrator | 2025-09-27 21:55:08 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-27 21:55:18.307693 | orchestrator | 2025-09-27 21:55:18 | INFO  | Task 8b89f07a-3960-4a60-b082-09ba4f16b643 (wipe-partitions) was prepared for execution. 2025-09-27 21:55:18.307843 | orchestrator | 2025-09-27 21:55:18 | INFO  | It takes a moment until task 8b89f07a-3960-4a60-b082-09ba4f16b643 (wipe-partitions) has been started and output is visible here. 
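The `semver latest 7.0.0` step in the trace prints a three-way comparison result (-1/0/1), and because `latest` compares below any release the script falls through to the literal `[[ latest == latest ]]` match before choosing the `osism apply --no-wait -r 2 -e custom pull-images` path. A sketch of that gate, where `compare_versions` is an assumed stand-in for the real `semver` helper shipped in the testbed image:

```shell
#!/usr/bin/env bash
# Stand-in for the `semver A B` helper seen in the trace: prints -1, 0, or 1.
compare_versions() {
    # Non-numeric tags such as "latest" sort below any concrete release,
    # matching the -1 result for `semver latest 7.0.0` in the log.
    [[ $1 =~ ^[0-9] ]] || { echo -1; return; }
    [[ $2 =~ ^[0-9] ]] || { echo 1; return; }
    if [[ $1 == "$2" ]]; then
        echo 0
    elif [[ $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

MANAGER_VERSION=latest
# Take the new code path either for releases >= 7.0.0 or for "latest".
if [[ $(compare_versions "$MANAGER_VERSION" 7.0.0) -ge 0 ]] \
   || [[ $MANAGER_VERSION == latest ]]; then
    echo "manager >= 7.0.0 (or latest): run the pull-images play"
fi
```

The double check explains the two trace lines `+ [[ -1 -ge 0 ]]` followed by `+ [[ latest == \l\a\t\e\s\t ]]`: the numeric compare fails for `latest`, so the string match is what actually selects the branch.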
2025-09-27 21:55:31.689184 | orchestrator | 2025-09-27 21:55:31.689304 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-27 21:55:31.689320 | orchestrator | 2025-09-27 21:55:31.689332 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-27 21:55:31.689350 | orchestrator | Saturday 27 September 2025 21:55:22 +0000 (0:00:00.135) 0:00:00.135 **** 2025-09-27 21:55:31.689364 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:55:31.689376 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:55:31.689387 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:55:31.689398 | orchestrator | 2025-09-27 21:55:31.689409 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-27 21:55:31.689448 | orchestrator | Saturday 27 September 2025 21:55:23 +0000 (0:00:00.586) 0:00:00.721 **** 2025-09-27 21:55:31.689460 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:55:31.689471 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:55:31.689486 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:55:31.689497 | orchestrator | 2025-09-27 21:55:31.689508 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-27 21:55:31.689519 | orchestrator | Saturday 27 September 2025 21:55:23 +0000 (0:00:00.228) 0:00:00.950 **** 2025-09-27 21:55:31.689530 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:55:31.689541 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:55:31.689552 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:55:31.689563 | orchestrator | 2025-09-27 21:55:31.689574 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-27 21:55:31.689585 | orchestrator | Saturday 27 September 2025 21:55:24 +0000 (0:00:00.700) 0:00:01.650 **** 2025-09-27 21:55:31.689595 | orchestrator | skipping: 
[testbed-node-3] 2025-09-27 21:55:31.689606 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:55:31.689617 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:55:31.689627 | orchestrator | 2025-09-27 21:55:31.689638 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-27 21:55:31.689649 | orchestrator | Saturday 27 September 2025 21:55:24 +0000 (0:00:00.259) 0:00:01.909 **** 2025-09-27 21:55:31.689660 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-27 21:55:31.689675 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-27 21:55:31.689689 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-27 21:55:31.689701 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-27 21:55:31.689713 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-27 21:55:31.689752 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-27 21:55:31.689765 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-27 21:55:31.689777 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-27 21:55:31.689789 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-27 21:55:31.689801 | orchestrator | 2025-09-27 21:55:31.689813 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-27 21:55:31.689826 | orchestrator | Saturday 27 September 2025 21:55:26 +0000 (0:00:02.189) 0:00:04.099 **** 2025-09-27 21:55:31.689839 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-27 21:55:31.689851 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-27 21:55:31.689863 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-27 21:55:31.689875 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-27 21:55:31.689887 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-27 21:55:31.689899 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-27 21:55:31.689911 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-27 21:55:31.689923 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-27 21:55:31.689935 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-27 21:55:31.689946 | orchestrator | 2025-09-27 21:55:31.689958 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-27 21:55:31.689971 | orchestrator | Saturday 27 September 2025 21:55:27 +0000 (0:00:01.300) 0:00:05.399 **** 2025-09-27 21:55:31.689984 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-27 21:55:31.689995 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-27 21:55:31.690006 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-27 21:55:31.690126 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-27 21:55:31.690143 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-27 21:55:31.690162 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-27 21:55:31.690173 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-27 21:55:31.690194 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-27 21:55:31.690206 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-27 21:55:31.690216 | orchestrator | 2025-09-27 21:55:31.690245 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-27 21:55:31.690256 | orchestrator | Saturday 27 September 2025 21:55:30 +0000 (0:00:02.273) 0:00:07.672 **** 2025-09-27 21:55:31.690267 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:55:31.690277 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:55:31.690288 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:55:31.690299 | orchestrator | 2025-09-27 21:55:31.690310 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-27 21:55:31.690321 | orchestrator | Saturday 27 September 2025 21:55:30 +0000 (0:00:00.596) 0:00:08.269 **** 2025-09-27 21:55:31.690332 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:55:31.690342 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:55:31.690353 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:55:31.690364 | orchestrator | 2025-09-27 21:55:31.690375 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:55:31.690388 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:31.690401 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:31.690430 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:31.690442 | orchestrator | 2025-09-27 21:55:31.690452 | orchestrator | 2025-09-27 21:55:31.690464 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:55:31.690475 | orchestrator | Saturday 27 September 2025 21:55:31 +0000 (0:00:00.612) 0:00:08.882 **** 2025-09-27 21:55:31.690486 | orchestrator | =============================================================================== 2025-09-27 21:55:31.690496 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.27s 2025-09-27 21:55:31.690507 | orchestrator | Check device availability ----------------------------------------------- 2.19s 2025-09-27 21:55:31.690518 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.30s 2025-09-27 21:55:31.690529 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s 2025-09-27 21:55:31.690539 | orchestrator | Request device events from the kernel 
----------------------------------- 0.61s 2025-09-27 21:55:31.690550 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2025-09-27 21:55:31.690560 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-09-27 21:55:31.690572 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2025-09-27 21:55:31.690583 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-09-27 21:55:43.945016 | orchestrator | 2025-09-27 21:55:43 | INFO  | Task 3fd1dbf5-be62-4c64-a17f-db4d815cc375 (facts) was prepared for execution. 2025-09-27 21:55:43.945107 | orchestrator | 2025-09-27 21:55:43 | INFO  | It takes a moment until task 3fd1dbf5-be62-4c64-a17f-db4d815cc375 (facts) has been started and output is visible here. 2025-09-27 21:55:56.282259 | orchestrator | 2025-09-27 21:55:56.283325 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-27 21:55:56.283367 | orchestrator | 2025-09-27 21:55:56.283380 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-27 21:55:56.283391 | orchestrator | Saturday 27 September 2025 21:55:47 +0000 (0:00:00.230) 0:00:00.230 **** 2025-09-27 21:55:56.283401 | orchestrator | ok: [testbed-manager] 2025-09-27 21:55:56.283413 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:55:56.283423 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:55:56.283460 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:55:56.283471 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:55:56.283480 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:55:56.283490 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:55:56.283500 | orchestrator | 2025-09-27 21:55:56.283626 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-27 21:55:56.283637 | 
orchestrator | Saturday 27 September 2025 21:55:48 +0000 (0:00:00.876) 0:00:01.106 **** 2025-09-27 21:55:56.283646 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:55:56.283657 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:55:56.283668 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:55:56.283677 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:55:56.283687 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:55:56.283696 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:55:56.283706 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:55:56.283715 | orchestrator | 2025-09-27 21:55:56.283725 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-27 21:55:56.283735 | orchestrator | 2025-09-27 21:55:56.283829 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-27 21:55:56.283841 | orchestrator | Saturday 27 September 2025 21:55:49 +0000 (0:00:01.067) 0:00:02.174 **** 2025-09-27 21:55:56.283850 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:55:56.283860 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:55:56.283870 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:55:56.283880 | orchestrator | ok: [testbed-manager] 2025-09-27 21:55:56.283889 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:55:56.283899 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:55:56.283908 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:55:56.283918 | orchestrator | 2025-09-27 21:55:56.283928 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-27 21:55:56.283937 | orchestrator | 2025-09-27 21:55:56.283947 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-27 21:55:56.283975 | orchestrator | Saturday 27 September 2025 21:55:55 +0000 (0:00:05.638) 0:00:07.812 **** 2025-09-27 21:55:56.283985 | orchestrator | 
skipping: [testbed-manager] 2025-09-27 21:55:56.283995 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:55:56.284004 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:55:56.284013 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:55:56.284023 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:55:56.284032 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:55:56.284042 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:55:56.284051 | orchestrator | 2025-09-27 21:55:56.284061 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:55:56.284071 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:56.284082 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:56.284144 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:56.284154 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:56.284164 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:56.284174 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:56.284253 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:55:56.284266 | orchestrator | 2025-09-27 21:55:56.284286 | orchestrator | 2025-09-27 21:55:56.284296 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:55:56.284305 | orchestrator | Saturday 27 September 2025 21:55:55 +0000 (0:00:00.489) 0:00:08.301 **** 2025-09-27 21:55:56.284315 | orchestrator | =============================================================================== 
2025-09-27 21:55:56.284324 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.64s
2025-09-27 21:55:56.284334 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s
2025-09-27 21:55:56.284343 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.88s
2025-09-27 21:55:56.284353 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s
2025-09-27 21:55:58.525982 | orchestrator | 2025-09-27 21:55:58 | INFO  | Task 784bd71a-e60b-4420-b2d6-ef139ee5cb03 (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-27 21:55:58.526156 | orchestrator | 2025-09-27 21:55:58 | INFO  | It takes a moment until task 784bd71a-e60b-4420-b2d6-ef139ee5cb03 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-27 21:56:10.234322 | orchestrator |
2025-09-27 21:56:10.234454 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-27 21:56:10.234473 | orchestrator |
2025-09-27 21:56:10.234485 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-27 21:56:10.234500 | orchestrator | Saturday 27 September 2025 21:56:02 +0000 (0:00:00.348) 0:00:00.348 ****
2025-09-27 21:56:10.234512 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-27 21:56:10.234523 | orchestrator |
2025-09-27 21:56:10.234535 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-27 21:56:10.234546 | orchestrator | Saturday 27 September 2025 21:56:02 +0000 (0:00:00.246) 0:00:00.595 ****
2025-09-27 21:56:10.234557 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:56:10.234569 | orchestrator |
2025-09-27 21:56:10.234580 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.234592 | orchestrator |
Saturday 27 September 2025 21:56:03 +0000 (0:00:00.223) 0:00:00.819 ****
2025-09-27 21:56:10.234602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-27 21:56:10.234614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-27 21:56:10.234625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-27 21:56:10.234636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-27 21:56:10.234647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-27 21:56:10.234658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-27 21:56:10.234668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-27 21:56:10.234679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-27 21:56:10.234690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-27 21:56:10.234714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-27 21:56:10.234724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-27 21:56:10.234745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-27 21:56:10.234757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-27 21:56:10.234768 | orchestrator |
2025-09-27 21:56:10.234779 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.234831 | orchestrator | Saturday 27 September 2025 21:56:03 +0000 (0:00:00.364) 0:00:01.183 ****
2025-09-27 21:56:10.234845 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.234882 | orchestrator |
2025-09-27 21:56:10.234894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.234907 | orchestrator | Saturday 27 September 2025 21:56:03 +0000 (0:00:00.468) 0:00:01.652 ****
2025-09-27 21:56:10.234919 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.234934 | orchestrator |
2025-09-27 21:56:10.234954 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.234979 | orchestrator | Saturday 27 September 2025 21:56:04 +0000 (0:00:00.206) 0:00:01.858 ****
2025-09-27 21:56:10.235006 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.235023 | orchestrator |
2025-09-27 21:56:10.235041 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235059 | orchestrator | Saturday 27 September 2025 21:56:04 +0000 (0:00:00.200) 0:00:02.059 ****
2025-09-27 21:56:10.235079 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.235102 | orchestrator |
2025-09-27 21:56:10.235115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235127 | orchestrator | Saturday 27 September 2025 21:56:04 +0000 (0:00:00.222) 0:00:02.282 ****
2025-09-27 21:56:10.235139 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.235151 | orchestrator |
2025-09-27 21:56:10.235164 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235176 | orchestrator | Saturday 27 September 2025 21:56:04 +0000 (0:00:00.203) 0:00:02.485 ****
2025-09-27 21:56:10.235187 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.235198 | orchestrator |
2025-09-27 21:56:10.235209 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235220 | orchestrator | Saturday 27 September 2025 21:56:05 +0000 (0:00:00.200) 0:00:02.685 ****
2025-09-27 21:56:10.235231 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.235241 | orchestrator |
2025-09-27 21:56:10.235252 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235263 | orchestrator | Saturday 27 September 2025 21:56:05 +0000 (0:00:00.189) 0:00:02.875 ****
2025-09-27 21:56:10.235274 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.235285 | orchestrator |
2025-09-27 21:56:10.235296 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235306 | orchestrator | Saturday 27 September 2025 21:56:05 +0000 (0:00:00.202) 0:00:03.077 ****
2025-09-27 21:56:10.235317 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124)
2025-09-27 21:56:10.235330 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124)
2025-09-27 21:56:10.235340 | orchestrator |
2025-09-27 21:56:10.235351 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235362 | orchestrator | Saturday 27 September 2025 21:56:05 +0000 (0:00:00.420) 0:00:03.497 ****
2025-09-27 21:56:10.235393 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129)
2025-09-27 21:56:10.235405 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129)
2025-09-27 21:56:10.235416 | orchestrator |
2025-09-27 21:56:10.235427 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235438 | orchestrator | Saturday 27 September 2025 21:56:06 +0000 (0:00:00.415) 0:00:03.913 ****
2025-09-27 21:56:10.235448 |
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d)
2025-09-27 21:56:10.235459 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d)
2025-09-27 21:56:10.235470 | orchestrator |
2025-09-27 21:56:10.235483 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235501 | orchestrator | Saturday 27 September 2025 21:56:06 +0000 (0:00:00.613) 0:00:04.526 ****
2025-09-27 21:56:10.235518 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9)
2025-09-27 21:56:10.235548 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9)
2025-09-27 21:56:10.235566 | orchestrator |
2025-09-27 21:56:10.235584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:10.235603 | orchestrator | Saturday 27 September 2025 21:56:07 +0000 (0:00:00.660) 0:00:05.186 ****
2025-09-27 21:56:10.235621 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-27 21:56:10.235640 | orchestrator |
2025-09-27 21:56:10.235659 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:10.235688 | orchestrator | Saturday 27 September 2025 21:56:08 +0000 (0:00:00.697) 0:00:05.883 ****
2025-09-27 21:56:10.235709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-27 21:56:10.235729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-27 21:56:10.235747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-27 21:56:10.235767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-27 21:56:10.235785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-27 21:56:10.235832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-27 21:56:10.235852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-27 21:56:10.235871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-27 21:56:10.235890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-27 21:56:10.235908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-27 21:56:10.235928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-27 21:56:10.235948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-27 21:56:10.235968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-27 21:56:10.235986 | orchestrator |
2025-09-27 21:56:10.236003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:10.236022 | orchestrator | Saturday 27 September 2025 21:56:08 +0000 (0:00:00.393) 0:00:06.277 ****
2025-09-27 21:56:10.236040 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.236057 | orchestrator |
2025-09-27 21:56:10.236069 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:10.236079 | orchestrator | Saturday 27 September 2025 21:56:08 +0000 (0:00:00.205) 0:00:06.482 ****
2025-09-27 21:56:10.236090 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.236101 | orchestrator |
2025-09-27 21:56:10.236111 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:10.236122 | orchestrator | Saturday 27 September 2025 21:56:09 +0000 (0:00:00.199) 0:00:06.682 ****
2025-09-27 21:56:10.236133 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.236144 | orchestrator |
2025-09-27 21:56:10.236155 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:10.236165 | orchestrator | Saturday 27 September 2025 21:56:09 +0000 (0:00:00.226) 0:00:06.909 ****
2025-09-27 21:56:10.236176 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.236187 | orchestrator |
2025-09-27 21:56:10.236198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:10.236209 | orchestrator | Saturday 27 September 2025 21:56:09 +0000 (0:00:00.208) 0:00:07.117 ****
2025-09-27 21:56:10.236219 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.236230 | orchestrator |
2025-09-27 21:56:10.236259 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:10.236271 | orchestrator | Saturday 27 September 2025 21:56:09 +0000 (0:00:00.199) 0:00:07.316 ****
2025-09-27 21:56:10.236281 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.236292 | orchestrator |
2025-09-27 21:56:10.236303 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:10.236313 | orchestrator | Saturday 27 September 2025 21:56:09 +0000 (0:00:00.197) 0:00:07.514 ****
2025-09-27 21:56:10.236324 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:10.236335 | orchestrator |
2025-09-27 21:56:10.236345 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:10.236356 | orchestrator | Saturday 27 September 2025 21:56:10 +0000 (0:00:00.184) 0:00:07.698 ****
2025-09-27 21:56:10.236378 |
orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.201216 | orchestrator |
2025-09-27 21:56:17.201331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:17.201347 | orchestrator | Saturday 27 September 2025 21:56:10 +0000 (0:00:00.201) 0:00:07.900 ****
2025-09-27 21:56:17.201360 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-27 21:56:17.201373 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-27 21:56:17.201384 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-27 21:56:17.201395 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-27 21:56:17.201406 | orchestrator |
2025-09-27 21:56:17.201417 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:17.201428 | orchestrator | Saturday 27 September 2025 21:56:11 +0000 (0:00:01.010) 0:00:08.910 ****
2025-09-27 21:56:17.201439 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.201449 | orchestrator |
2025-09-27 21:56:17.201460 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:17.201471 | orchestrator | Saturday 27 September 2025 21:56:11 +0000 (0:00:00.204) 0:00:09.115 ****
2025-09-27 21:56:17.201481 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.201492 | orchestrator |
2025-09-27 21:56:17.201503 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:17.201514 | orchestrator | Saturday 27 September 2025 21:56:11 +0000 (0:00:00.196) 0:00:09.312 ****
2025-09-27 21:56:17.201524 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.201535 | orchestrator |
2025-09-27 21:56:17.201546 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:17.201556 | orchestrator | Saturday 27 September 2025 21:56:11 +0000 (0:00:00.187) 0:00:09.500 ****
2025-09-27 21:56:17.201567 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.201577 | orchestrator |
2025-09-27 21:56:17.201588 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-27 21:56:17.201599 | orchestrator | Saturday 27 September 2025 21:56:12 +0000 (0:00:00.204) 0:00:09.704 ****
2025-09-27 21:56:17.201609 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-27 21:56:17.201620 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-27 21:56:17.201631 | orchestrator |
2025-09-27 21:56:17.201642 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-27 21:56:17.201652 | orchestrator | Saturday 27 September 2025 21:56:12 +0000 (0:00:00.169) 0:00:09.874 ****
2025-09-27 21:56:17.201682 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.201694 | orchestrator |
2025-09-27 21:56:17.201704 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-27 21:56:17.201715 | orchestrator | Saturday 27 September 2025 21:56:12 +0000 (0:00:00.139) 0:00:10.014 ****
2025-09-27 21:56:17.201726 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.201737 | orchestrator |
2025-09-27 21:56:17.201749 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-27 21:56:17.201761 | orchestrator | Saturday 27 September 2025 21:56:12 +0000 (0:00:00.140) 0:00:10.154 ****
2025-09-27 21:56:17.201773 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.201831 | orchestrator |
2025-09-27 21:56:17.201845 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-27 21:56:17.201857 | orchestrator | Saturday 27 September 2025 21:56:12 +0000 (0:00:00.124) 0:00:10.279 ****
2025-09-27 21:56:17.201870 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:56:17.201882 | orchestrator |
2025-09-27 21:56:17.201895 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-27 21:56:17.201907 | orchestrator | Saturday 27 September 2025 21:56:12 +0000 (0:00:00.130) 0:00:10.409 ****
2025-09-27 21:56:17.201920 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'}})
2025-09-27 21:56:17.201933 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d8c80c3-887a-53bd-bc85-16ee8bc68188'}})
2025-09-27 21:56:17.201945 | orchestrator |
2025-09-27 21:56:17.201958 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-27 21:56:17.201970 | orchestrator | Saturday 27 September 2025 21:56:12 +0000 (0:00:00.173) 0:00:10.583 ****
2025-09-27 21:56:17.201983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'}})
2025-09-27 21:56:17.202004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d8c80c3-887a-53bd-bc85-16ee8bc68188'}})
2025-09-27 21:56:17.202117 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.202133 | orchestrator |
2025-09-27 21:56:17.202144 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-27 21:56:17.202154 | orchestrator | Saturday 27 September 2025 21:56:13 +0000 (0:00:00.149) 0:00:10.733 ****
2025-09-27 21:56:17.202165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'}})
2025-09-27 21:56:17.202176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d8c80c3-887a-53bd-bc85-16ee8bc68188'}})
2025-09-27 21:56:17.202187 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.202198 | orchestrator |
2025-09-27 21:56:17.202209 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-27 21:56:17.202219 | orchestrator | Saturday 27 September 2025 21:56:13 +0000 (0:00:00.347) 0:00:11.081 ****
2025-09-27 21:56:17.202230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'}})
2025-09-27 21:56:17.202241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d8c80c3-887a-53bd-bc85-16ee8bc68188'}})
2025-09-27 21:56:17.202252 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.202262 | orchestrator |
2025-09-27 21:56:17.202293 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-27 21:56:17.202304 | orchestrator | Saturday 27 September 2025 21:56:13 +0000 (0:00:00.157) 0:00:11.238 ****
2025-09-27 21:56:17.202315 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:56:17.202326 | orchestrator |
2025-09-27 21:56:17.202336 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-27 21:56:17.202356 | orchestrator | Saturday 27 September 2025 21:56:13 +0000 (0:00:00.143) 0:00:11.381 ****
2025-09-27 21:56:17.202367 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:56:17.202377 | orchestrator |
2025-09-27 21:56:17.202388 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-27 21:56:17.202398 | orchestrator | Saturday 27 September 2025 21:56:13 +0000 (0:00:00.140) 0:00:11.522 ****
2025-09-27 21:56:17.202409 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.202419 | orchestrator |
2025-09-27 21:56:17.202430 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-27 21:56:17.202441 | orchestrator | Saturday 27 September 2025 21:56:13 +0000
(0:00:00.134) 0:00:11.657 ****
2025-09-27 21:56:17.202451 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.202462 | orchestrator |
2025-09-27 21:56:17.202482 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-27 21:56:17.202493 | orchestrator | Saturday 27 September 2025 21:56:14 +0000 (0:00:00.131) 0:00:11.788 ****
2025-09-27 21:56:17.202504 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.202514 | orchestrator |
2025-09-27 21:56:17.202525 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-27 21:56:17.202536 | orchestrator | Saturday 27 September 2025 21:56:14 +0000 (0:00:00.133) 0:00:11.922 ****
2025-09-27 21:56:17.202546 | orchestrator | ok: [testbed-node-3] => {
2025-09-27 21:56:17.202557 | orchestrator |     "ceph_osd_devices": {
2025-09-27 21:56:17.202568 | orchestrator |         "sdb": {
2025-09-27 21:56:17.202579 | orchestrator |             "osd_lvm_uuid": "3ef55d2f-0db9-555d-b1b6-fd7fdf57b491"
2025-09-27 21:56:17.202590 | orchestrator |         },
2025-09-27 21:56:17.202600 | orchestrator |         "sdc": {
2025-09-27 21:56:17.202611 | orchestrator |             "osd_lvm_uuid": "8d8c80c3-887a-53bd-bc85-16ee8bc68188"
2025-09-27 21:56:17.202622 | orchestrator |         }
2025-09-27 21:56:17.202633 | orchestrator |     }
2025-09-27 21:56:17.202643 | orchestrator | }
2025-09-27 21:56:17.202654 | orchestrator |
2025-09-27 21:56:17.202665 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-27 21:56:17.202676 | orchestrator | Saturday 27 September 2025 21:56:14 +0000 (0:00:00.144) 0:00:12.067 ****
2025-09-27 21:56:17.202686 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.202697 | orchestrator |
2025-09-27 21:56:17.202708 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-27 21:56:17.202718 | orchestrator | Saturday 27 September 2025 21:56:14 +0000 (0:00:00.128) 0:00:12.196 ****
2025-09-27 21:56:17.202729 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.202739 | orchestrator |
2025-09-27 21:56:17.202750 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-27 21:56:17.202760 | orchestrator | Saturday 27 September 2025 21:56:14 +0000 (0:00:00.130) 0:00:12.326 ****
2025-09-27 21:56:17.202771 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:56:17.202782 | orchestrator |
2025-09-27 21:56:17.202792 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-27 21:56:17.202839 | orchestrator | Saturday 27 September 2025 21:56:14 +0000 (0:00:00.143) 0:00:12.469 ****
2025-09-27 21:56:17.202850 | orchestrator | changed: [testbed-node-3] => {
2025-09-27 21:56:17.202861 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-27 21:56:17.202872 | orchestrator |         "ceph_osd_devices": {
2025-09-27 21:56:17.202883 | orchestrator |             "sdb": {
2025-09-27 21:56:17.202894 | orchestrator |                 "osd_lvm_uuid": "3ef55d2f-0db9-555d-b1b6-fd7fdf57b491"
2025-09-27 21:56:17.202905 | orchestrator |             },
2025-09-27 21:56:17.202915 | orchestrator |             "sdc": {
2025-09-27 21:56:17.202926 | orchestrator |                 "osd_lvm_uuid": "8d8c80c3-887a-53bd-bc85-16ee8bc68188"
2025-09-27 21:56:17.202936 | orchestrator |             }
2025-09-27 21:56:17.202947 | orchestrator |         },
2025-09-27 21:56:17.202957 | orchestrator |         "lvm_volumes": [
2025-09-27 21:56:17.202968 | orchestrator |             {
2025-09-27 21:56:17.202978 | orchestrator |                 "data": "osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491",
2025-09-27 21:56:17.202989 | orchestrator |                 "data_vg": "ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491"
2025-09-27 21:56:17.203000 | orchestrator |             },
2025-09-27 21:56:17.203010 | orchestrator |             {
2025-09-27 21:56:17.203021 | orchestrator |                 "data": "osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188",
2025-09-27 21:56:17.203031 | orchestrator |                 "data_vg": "ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188"
2025-09-27 21:56:17.203042 | orchestrator |             }
2025-09-27 21:56:17.203052 | orchestrator |         ]
2025-09-27 21:56:17.203063 | orchestrator |     }
2025-09-27 21:56:17.203073 | orchestrator | }
2025-09-27 21:56:17.203084 | orchestrator |
2025-09-27 21:56:17.203094 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-27 21:56:17.203118 | orchestrator | Saturday 27 September 2025 21:56:15 +0000 (0:00:00.406) 0:00:12.876 ****
2025-09-27 21:56:17.203130 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-27 21:56:17.203140 | orchestrator |
2025-09-27 21:56:17.203151 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-27 21:56:17.203161 | orchestrator |
2025-09-27 21:56:17.203172 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-27 21:56:17.203182 | orchestrator | Saturday 27 September 2025 21:56:16 +0000 (0:00:01.570) 0:00:14.446 ****
2025-09-27 21:56:17.203193 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-27 21:56:17.203204 | orchestrator |
2025-09-27 21:56:17.203214 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-27 21:56:17.203224 | orchestrator | Saturday 27 September 2025 21:56:16 +0000 (0:00:00.219) 0:00:14.666 ****
2025-09-27 21:56:17.203235 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:56:17.203245 | orchestrator |
2025-09-27 21:56:17.203256 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:17.203274 | orchestrator | Saturday 27 September 2025 21:56:17 +0000 (0:00:00.201) 0:00:14.867 ****
2025-09-27 21:56:24.151485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-27 21:56:24.152489 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-27 21:56:24.152534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-27 21:56:24.152555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-27 21:56:24.152568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-27 21:56:24.152580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-27 21:56:24.152591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-27 21:56:24.152602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-27 21:56:24.152613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-27 21:56:24.152624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-27 21:56:24.152634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-27 21:56:24.152645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-27 21:56:24.152656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-27 21:56:24.152672 | orchestrator |
2025-09-27 21:56:24.152684 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.152696 | orchestrator | Saturday 27 September 2025 21:56:17 +0000 (0:00:00.328) 0:00:15.195 ****
2025-09-27 21:56:24.152708 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:56:24.152720 | orchestrator |
2025-09-27 21:56:24.152731 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.152742 | orchestrator | Saturday 27 September 2025 21:56:17 +0000 (0:00:00.166) 0:00:15.361 ****
2025-09-27 21:56:24.152753 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:56:24.152764 | orchestrator |
2025-09-27 21:56:24.152775 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.152785 | orchestrator | Saturday 27 September 2025 21:56:17 +0000 (0:00:00.158) 0:00:15.520 ****
2025-09-27 21:56:24.152796 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:56:24.152807 | orchestrator |
2025-09-27 21:56:24.152843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.152855 | orchestrator | Saturday 27 September 2025 21:56:18 +0000 (0:00:00.168) 0:00:15.689 ****
2025-09-27 21:56:24.152866 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:56:24.152903 | orchestrator |
2025-09-27 21:56:24.152914 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.152925 | orchestrator | Saturday 27 September 2025 21:56:18 +0000 (0:00:00.203) 0:00:15.893 ****
2025-09-27 21:56:24.152936 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:56:24.152946 | orchestrator |
2025-09-27 21:56:24.152957 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.152968 | orchestrator | Saturday 27 September 2025 21:56:18 +0000 (0:00:00.452) 0:00:16.345 ****
2025-09-27 21:56:24.152978 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:56:24.152989 | orchestrator |
2025-09-27 21:56:24.153000 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.153011 | orchestrator | Saturday 27 September 2025 21:56:18 +0000 (0:00:00.157) 0:00:16.502 ****
2025-09-27 21:56:24.153022 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:56:24.153032 | orchestrator |
2025-09-27 21:56:24.153061 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.153072 | orchestrator | Saturday 27 September 2025 21:56:18 +0000 (0:00:00.147) 0:00:16.650 ****
2025-09-27 21:56:24.153083 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:56:24.153093 | orchestrator |
2025-09-27 21:56:24.153104 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.153114 | orchestrator | Saturday 27 September 2025 21:56:19 +0000 (0:00:00.169) 0:00:16.819 ****
2025-09-27 21:56:24.153125 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43)
2025-09-27 21:56:24.153137 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43)
2025-09-27 21:56:24.153148 | orchestrator |
2025-09-27 21:56:24.153158 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.153169 | orchestrator | Saturday 27 September 2025 21:56:19 +0000 (0:00:00.339) 0:00:17.159 ****
2025-09-27 21:56:24.153180 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae)
2025-09-27 21:56:24.153190 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae)
2025-09-27 21:56:24.153201 | orchestrator |
2025-09-27 21:56:24.153211 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.153222 | orchestrator | Saturday 27 September 2025 21:56:19 +0000 (0:00:00.380) 0:00:17.539 ****
2025-09-27 21:56:24.153232 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901)
2025-09-27 21:56:24.153243 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901)
2025-09-27 21:56:24.153253 | orchestrator |
2025-09-27 21:56:24.153264 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.153275 | orchestrator | Saturday 27 September 2025 21:56:20 +0000 (0:00:00.413) 0:00:17.953 ****
2025-09-27 21:56:24.153306 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60)
2025-09-27 21:56:24.153318 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60)
2025-09-27 21:56:24.153328 | orchestrator |
2025-09-27 21:56:24.153339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:56:24.153350 | orchestrator | Saturday 27 September 2025 21:56:20 +0000 (0:00:00.392) 0:00:18.346 ****
2025-09-27 21:56:24.153360 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-27 21:56:24.153371 | orchestrator |
2025-09-27 21:56:24.153382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:56:24.153397 | orchestrator | Saturday 27 September 2025 21:56:20 +0000 (0:00:00.306) 0:00:18.652 ****
2025-09-27 21:56:24.153416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-27 21:56:24.153449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-27 21:56:24.153467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-27 21:56:24.153485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-27 21:56:24.153503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-27 21:56:24.153521 | orchestrator |
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-27 21:56:24.153539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-27 21:56:24.153559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-27 21:56:24.153578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-27 21:56:24.153597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-27 21:56:24.153615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-27 21:56:24.153634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-27 21:56:24.153653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-27 21:56:24.153671 | orchestrator | 2025-09-27 21:56:24.153688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:24.153699 | orchestrator | Saturday 27 September 2025 21:56:21 +0000 (0:00:00.336) 0:00:18.989 **** 2025-09-27 21:56:24.153710 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:24.153721 | orchestrator | 2025-09-27 21:56:24.153732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:24.153742 | orchestrator | Saturday 27 September 2025 21:56:21 +0000 (0:00:00.183) 0:00:19.173 **** 2025-09-27 21:56:24.153753 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:24.153764 | orchestrator | 2025-09-27 21:56:24.153775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:24.153785 | orchestrator | Saturday 27 September 2025 21:56:21 +0000 (0:00:00.487) 0:00:19.660 **** 
2025-09-27 21:56:24.153806 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:24.153880 | orchestrator | 2025-09-27 21:56:24.153895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:24.153970 | orchestrator | Saturday 27 September 2025 21:56:22 +0000 (0:00:00.206) 0:00:19.867 **** 2025-09-27 21:56:24.153983 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:24.153994 | orchestrator | 2025-09-27 21:56:24.154005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:24.154067 | orchestrator | Saturday 27 September 2025 21:56:22 +0000 (0:00:00.174) 0:00:20.041 **** 2025-09-27 21:56:24.154079 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:24.154090 | orchestrator | 2025-09-27 21:56:24.154100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:24.154111 | orchestrator | Saturday 27 September 2025 21:56:22 +0000 (0:00:00.243) 0:00:20.285 **** 2025-09-27 21:56:24.154122 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:24.154132 | orchestrator | 2025-09-27 21:56:24.154143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:24.154153 | orchestrator | Saturday 27 September 2025 21:56:22 +0000 (0:00:00.182) 0:00:20.467 **** 2025-09-27 21:56:24.154164 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:24.154210 | orchestrator | 2025-09-27 21:56:24.154223 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:24.154234 | orchestrator | Saturday 27 September 2025 21:56:22 +0000 (0:00:00.178) 0:00:20.646 **** 2025-09-27 21:56:24.154244 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:24.154255 | orchestrator | 2025-09-27 21:56:24.154265 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-27 21:56:24.154287 | orchestrator | Saturday 27 September 2025 21:56:23 +0000 (0:00:00.191) 0:00:20.837 **** 2025-09-27 21:56:24.154298 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-27 21:56:24.154310 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-27 21:56:24.154321 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-27 21:56:24.154332 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-27 21:56:24.154342 | orchestrator | 2025-09-27 21:56:24.154353 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:24.154364 | orchestrator | Saturday 27 September 2025 21:56:23 +0000 (0:00:00.798) 0:00:21.636 **** 2025-09-27 21:56:24.154374 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:24.154385 | orchestrator | 2025-09-27 21:56:24.154416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:30.611885 | orchestrator | Saturday 27 September 2025 21:56:24 +0000 (0:00:00.181) 0:00:21.817 **** 2025-09-27 21:56:30.612000 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612017 | orchestrator | 2025-09-27 21:56:30.612030 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:30.612042 | orchestrator | Saturday 27 September 2025 21:56:24 +0000 (0:00:00.175) 0:00:21.993 **** 2025-09-27 21:56:30.612053 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612064 | orchestrator | 2025-09-27 21:56:30.612075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:30.612086 | orchestrator | Saturday 27 September 2025 21:56:24 +0000 (0:00:00.199) 0:00:22.192 **** 2025-09-27 21:56:30.612097 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612108 | orchestrator | 2025-09-27 21:56:30.612118 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-09-27 21:56:30.612129 | orchestrator | Saturday 27 September 2025 21:56:24 +0000 (0:00:00.179) 0:00:22.372 **** 2025-09-27 21:56:30.612140 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-27 21:56:30.612150 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-27 21:56:30.612161 | orchestrator | 2025-09-27 21:56:30.612172 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-27 21:56:30.612183 | orchestrator | Saturday 27 September 2025 21:56:25 +0000 (0:00:00.303) 0:00:22.675 **** 2025-09-27 21:56:30.612193 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612204 | orchestrator | 2025-09-27 21:56:30.612215 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-27 21:56:30.612226 | orchestrator | Saturday 27 September 2025 21:56:25 +0000 (0:00:00.120) 0:00:22.796 **** 2025-09-27 21:56:30.612237 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612248 | orchestrator | 2025-09-27 21:56:30.612258 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-27 21:56:30.612269 | orchestrator | Saturday 27 September 2025 21:56:25 +0000 (0:00:00.129) 0:00:22.925 **** 2025-09-27 21:56:30.612280 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612290 | orchestrator | 2025-09-27 21:56:30.612301 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-27 21:56:30.612312 | orchestrator | Saturday 27 September 2025 21:56:25 +0000 (0:00:00.126) 0:00:23.052 **** 2025-09-27 21:56:30.612323 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:56:30.612335 | orchestrator | 2025-09-27 21:56:30.612346 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-27 
21:56:30.612356 | orchestrator | Saturday 27 September 2025 21:56:25 +0000 (0:00:00.127) 0:00:23.180 **** 2025-09-27 21:56:30.612369 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'be08f40e-52da-5801-960c-910a686d222b'}}) 2025-09-27 21:56:30.612383 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2801305-6ac8-5a65-9707-7cc055d05458'}}) 2025-09-27 21:56:30.612396 | orchestrator | 2025-09-27 21:56:30.612408 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-27 21:56:30.612447 | orchestrator | Saturday 27 September 2025 21:56:25 +0000 (0:00:00.167) 0:00:23.347 **** 2025-09-27 21:56:30.612460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'be08f40e-52da-5801-960c-910a686d222b'}})  2025-09-27 21:56:30.612474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2801305-6ac8-5a65-9707-7cc055d05458'}})  2025-09-27 21:56:30.612486 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612498 | orchestrator | 2025-09-27 21:56:30.612510 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-27 21:56:30.612522 | orchestrator | Saturday 27 September 2025 21:56:25 +0000 (0:00:00.141) 0:00:23.489 **** 2025-09-27 21:56:30.612553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'be08f40e-52da-5801-960c-910a686d222b'}})  2025-09-27 21:56:30.612566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2801305-6ac8-5a65-9707-7cc055d05458'}})  2025-09-27 21:56:30.612578 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612590 | orchestrator | 2025-09-27 21:56:30.612603 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-27 21:56:30.612615 | 
orchestrator | Saturday 27 September 2025 21:56:25 +0000 (0:00:00.131) 0:00:23.620 **** 2025-09-27 21:56:30.612627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'be08f40e-52da-5801-960c-910a686d222b'}})  2025-09-27 21:56:30.612639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2801305-6ac8-5a65-9707-7cc055d05458'}})  2025-09-27 21:56:30.612652 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612664 | orchestrator | 2025-09-27 21:56:30.612676 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-27 21:56:30.612688 | orchestrator | Saturday 27 September 2025 21:56:26 +0000 (0:00:00.165) 0:00:23.785 **** 2025-09-27 21:56:30.612700 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:56:30.612712 | orchestrator | 2025-09-27 21:56:30.612724 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-27 21:56:30.612736 | orchestrator | Saturday 27 September 2025 21:56:26 +0000 (0:00:00.154) 0:00:23.939 **** 2025-09-27 21:56:30.612746 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:56:30.612757 | orchestrator | 2025-09-27 21:56:30.612768 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-27 21:56:30.612778 | orchestrator | Saturday 27 September 2025 21:56:26 +0000 (0:00:00.150) 0:00:24.090 **** 2025-09-27 21:56:30.612789 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612800 | orchestrator | 2025-09-27 21:56:30.612880 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-27 21:56:30.612895 | orchestrator | Saturday 27 September 2025 21:56:26 +0000 (0:00:00.158) 0:00:24.248 **** 2025-09-27 21:56:30.612906 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612916 | orchestrator | 2025-09-27 21:56:30.612927 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-09-27 21:56:30.612938 | orchestrator | Saturday 27 September 2025 21:56:26 +0000 (0:00:00.328) 0:00:24.577 **** 2025-09-27 21:56:30.612949 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.612959 | orchestrator | 2025-09-27 21:56:30.612970 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-27 21:56:30.612980 | orchestrator | Saturday 27 September 2025 21:56:27 +0000 (0:00:00.129) 0:00:24.707 **** 2025-09-27 21:56:30.612991 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:56:30.613002 | orchestrator |  "ceph_osd_devices": { 2025-09-27 21:56:30.613012 | orchestrator |  "sdb": { 2025-09-27 21:56:30.613023 | orchestrator |  "osd_lvm_uuid": "be08f40e-52da-5801-960c-910a686d222b" 2025-09-27 21:56:30.613034 | orchestrator |  }, 2025-09-27 21:56:30.613045 | orchestrator |  "sdc": { 2025-09-27 21:56:30.613067 | orchestrator |  "osd_lvm_uuid": "a2801305-6ac8-5a65-9707-7cc055d05458" 2025-09-27 21:56:30.613078 | orchestrator |  } 2025-09-27 21:56:30.613088 | orchestrator |  } 2025-09-27 21:56:30.613099 | orchestrator | } 2025-09-27 21:56:30.613110 | orchestrator | 2025-09-27 21:56:30.613121 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-27 21:56:30.613131 | orchestrator | Saturday 27 September 2025 21:56:27 +0000 (0:00:00.136) 0:00:24.843 **** 2025-09-27 21:56:30.613142 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.613153 | orchestrator | 2025-09-27 21:56:30.613163 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-27 21:56:30.613174 | orchestrator | Saturday 27 September 2025 21:56:27 +0000 (0:00:00.123) 0:00:24.967 **** 2025-09-27 21:56:30.613184 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.613195 | orchestrator | 2025-09-27 21:56:30.613206 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-09-27 21:56:30.613216 | orchestrator | Saturday 27 September 2025 21:56:27 +0000 (0:00:00.124) 0:00:25.091 **** 2025-09-27 21:56:30.613227 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:56:30.613237 | orchestrator | 2025-09-27 21:56:30.613248 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-27 21:56:30.613259 | orchestrator | Saturday 27 September 2025 21:56:27 +0000 (0:00:00.332) 0:00:25.424 **** 2025-09-27 21:56:30.613269 | orchestrator | changed: [testbed-node-4] => { 2025-09-27 21:56:30.613280 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-27 21:56:30.613291 | orchestrator |  "ceph_osd_devices": { 2025-09-27 21:56:30.613302 | orchestrator |  "sdb": { 2025-09-27 21:56:30.613313 | orchestrator |  "osd_lvm_uuid": "be08f40e-52da-5801-960c-910a686d222b" 2025-09-27 21:56:30.613323 | orchestrator |  }, 2025-09-27 21:56:30.613334 | orchestrator |  "sdc": { 2025-09-27 21:56:30.613345 | orchestrator |  "osd_lvm_uuid": "a2801305-6ac8-5a65-9707-7cc055d05458" 2025-09-27 21:56:30.613355 | orchestrator |  } 2025-09-27 21:56:30.613366 | orchestrator |  }, 2025-09-27 21:56:30.613377 | orchestrator |  "lvm_volumes": [ 2025-09-27 21:56:30.613387 | orchestrator |  { 2025-09-27 21:56:30.613398 | orchestrator |  "data": "osd-block-be08f40e-52da-5801-960c-910a686d222b", 2025-09-27 21:56:30.613408 | orchestrator |  "data_vg": "ceph-be08f40e-52da-5801-960c-910a686d222b" 2025-09-27 21:56:30.613419 | orchestrator |  }, 2025-09-27 21:56:30.613430 | orchestrator |  { 2025-09-27 21:56:30.613440 | orchestrator |  "data": "osd-block-a2801305-6ac8-5a65-9707-7cc055d05458", 2025-09-27 21:56:30.613451 | orchestrator |  "data_vg": "ceph-a2801305-6ac8-5a65-9707-7cc055d05458" 2025-09-27 21:56:30.613461 | orchestrator |  } 2025-09-27 21:56:30.613472 | orchestrator |  ] 2025-09-27 21:56:30.613482 | orchestrator |  } 2025-09-27 21:56:30.613493 | 
orchestrator | } 2025-09-27 21:56:30.613503 | orchestrator | 2025-09-27 21:56:30.613514 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-27 21:56:30.613525 | orchestrator | Saturday 27 September 2025 21:56:27 +0000 (0:00:00.221) 0:00:25.645 **** 2025-09-27 21:56:30.613536 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-27 21:56:30.613546 | orchestrator | 2025-09-27 21:56:30.613557 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-27 21:56:30.613568 | orchestrator | 2025-09-27 21:56:30.613578 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-27 21:56:30.613589 | orchestrator | Saturday 27 September 2025 21:56:29 +0000 (0:00:01.103) 0:00:26.749 **** 2025-09-27 21:56:30.613599 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-27 21:56:30.613610 | orchestrator | 2025-09-27 21:56:30.613621 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-27 21:56:30.613631 | orchestrator | Saturday 27 September 2025 21:56:29 +0000 (0:00:00.488) 0:00:27.238 **** 2025-09-27 21:56:30.613649 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:56:30.613659 | orchestrator | 2025-09-27 21:56:30.613670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:30.613681 | orchestrator | Saturday 27 September 2025 21:56:30 +0000 (0:00:00.652) 0:00:27.890 **** 2025-09-27 21:56:30.613699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-27 21:56:30.613710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-27 21:56:30.613721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-27 
21:56:30.613731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-27 21:56:30.613742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-27 21:56:30.613753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-27 21:56:30.613769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-27 21:56:37.626090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-27 21:56:37.626191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-27 21:56:37.626202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-27 21:56:37.626212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-27 21:56:37.626220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-27 21:56:37.626228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-27 21:56:37.626236 | orchestrator | 2025-09-27 21:56:37.626244 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626253 | orchestrator | Saturday 27 September 2025 21:56:30 +0000 (0:00:00.385) 0:00:28.276 **** 2025-09-27 21:56:37.626260 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626268 | orchestrator | 2025-09-27 21:56:37.626276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626284 | orchestrator | Saturday 27 September 2025 21:56:30 +0000 (0:00:00.177) 0:00:28.454 **** 2025-09-27 21:56:37.626291 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626298 | orchestrator | 
2025-09-27 21:56:37.626305 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626313 | orchestrator | Saturday 27 September 2025 21:56:30 +0000 (0:00:00.170) 0:00:28.624 **** 2025-09-27 21:56:37.626320 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626327 | orchestrator | 2025-09-27 21:56:37.626334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626342 | orchestrator | Saturday 27 September 2025 21:56:31 +0000 (0:00:00.233) 0:00:28.858 **** 2025-09-27 21:56:37.626349 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626356 | orchestrator | 2025-09-27 21:56:37.626363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626371 | orchestrator | Saturday 27 September 2025 21:56:31 +0000 (0:00:00.229) 0:00:29.088 **** 2025-09-27 21:56:37.626378 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626385 | orchestrator | 2025-09-27 21:56:37.626392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626400 | orchestrator | Saturday 27 September 2025 21:56:31 +0000 (0:00:00.196) 0:00:29.285 **** 2025-09-27 21:56:37.626407 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626414 | orchestrator | 2025-09-27 21:56:37.626421 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626429 | orchestrator | Saturday 27 September 2025 21:56:31 +0000 (0:00:00.167) 0:00:29.452 **** 2025-09-27 21:56:37.626436 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626462 | orchestrator | 2025-09-27 21:56:37.626470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626477 | orchestrator | Saturday 27 September 2025 21:56:31 +0000 
(0:00:00.178) 0:00:29.631 **** 2025-09-27 21:56:37.626485 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626492 | orchestrator | 2025-09-27 21:56:37.626499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626506 | orchestrator | Saturday 27 September 2025 21:56:32 +0000 (0:00:00.188) 0:00:29.819 **** 2025-09-27 21:56:37.626514 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187) 2025-09-27 21:56:37.626522 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187) 2025-09-27 21:56:37.626530 | orchestrator | 2025-09-27 21:56:37.626537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626544 | orchestrator | Saturday 27 September 2025 21:56:32 +0000 (0:00:00.555) 0:00:30.375 **** 2025-09-27 21:56:37.626551 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55) 2025-09-27 21:56:37.626559 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55) 2025-09-27 21:56:37.626566 | orchestrator | 2025-09-27 21:56:37.626573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626580 | orchestrator | Saturday 27 September 2025 21:56:33 +0000 (0:00:00.637) 0:00:31.013 **** 2025-09-27 21:56:37.626587 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96) 2025-09-27 21:56:37.626595 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96) 2025-09-27 21:56:37.626602 | orchestrator | 2025-09-27 21:56:37.626611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626619 | orchestrator | 
Saturday 27 September 2025 21:56:33 +0000 (0:00:00.310) 0:00:31.324 **** 2025-09-27 21:56:37.626627 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f) 2025-09-27 21:56:37.626636 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f) 2025-09-27 21:56:37.626645 | orchestrator | 2025-09-27 21:56:37.626653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:56:37.626662 | orchestrator | Saturday 27 September 2025 21:56:33 +0000 (0:00:00.341) 0:00:31.665 **** 2025-09-27 21:56:37.626671 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-27 21:56:37.626679 | orchestrator | 2025-09-27 21:56:37.626688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.626696 | orchestrator | Saturday 27 September 2025 21:56:34 +0000 (0:00:00.243) 0:00:31.909 **** 2025-09-27 21:56:37.626718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-27 21:56:37.626727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-27 21:56:37.626736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-27 21:56:37.626744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-27 21:56:37.626752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-27 21:56:37.626761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-27 21:56:37.626769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-27 21:56:37.626777 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-27 21:56:37.626786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-27 21:56:37.626816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-27 21:56:37.626824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-27 21:56:37.626832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-27 21:56:37.626863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-27 21:56:37.626871 | orchestrator | 2025-09-27 21:56:37.626879 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.626888 | orchestrator | Saturday 27 September 2025 21:56:34 +0000 (0:00:00.359) 0:00:32.269 **** 2025-09-27 21:56:37.626896 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626904 | orchestrator | 2025-09-27 21:56:37.626912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.626920 | orchestrator | Saturday 27 September 2025 21:56:34 +0000 (0:00:00.196) 0:00:32.465 **** 2025-09-27 21:56:37.626928 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626936 | orchestrator | 2025-09-27 21:56:37.626944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.626952 | orchestrator | Saturday 27 September 2025 21:56:34 +0000 (0:00:00.181) 0:00:32.647 **** 2025-09-27 21:56:37.626960 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.626968 | orchestrator | 2025-09-27 21:56:37.626980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.626988 | 
orchestrator | Saturday 27 September 2025 21:56:35 +0000 (0:00:00.215) 0:00:32.862 **** 2025-09-27 21:56:37.626995 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.627002 | orchestrator | 2025-09-27 21:56:37.627009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.627016 | orchestrator | Saturday 27 September 2025 21:56:35 +0000 (0:00:00.166) 0:00:33.029 **** 2025-09-27 21:56:37.627023 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.627030 | orchestrator | 2025-09-27 21:56:37.627037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.627044 | orchestrator | Saturday 27 September 2025 21:56:35 +0000 (0:00:00.185) 0:00:33.214 **** 2025-09-27 21:56:37.627051 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.627058 | orchestrator | 2025-09-27 21:56:37.627065 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.627072 | orchestrator | Saturday 27 September 2025 21:56:36 +0000 (0:00:00.488) 0:00:33.703 **** 2025-09-27 21:56:37.627079 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.627086 | orchestrator | 2025-09-27 21:56:37.627093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.627100 | orchestrator | Saturday 27 September 2025 21:56:36 +0000 (0:00:00.194) 0:00:33.897 **** 2025-09-27 21:56:37.627107 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.627114 | orchestrator | 2025-09-27 21:56:37.627121 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.627128 | orchestrator | Saturday 27 September 2025 21:56:36 +0000 (0:00:00.160) 0:00:34.058 **** 2025-09-27 21:56:37.627135 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-27 21:56:37.627142 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-09-27 21:56:37.627150 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-27 21:56:37.627157 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-27 21:56:37.627164 | orchestrator | 2025-09-27 21:56:37.627171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.627178 | orchestrator | Saturday 27 September 2025 21:56:36 +0000 (0:00:00.508) 0:00:34.566 **** 2025-09-27 21:56:37.627185 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.627192 | orchestrator | 2025-09-27 21:56:37.627200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.627212 | orchestrator | Saturday 27 September 2025 21:56:37 +0000 (0:00:00.185) 0:00:34.752 **** 2025-09-27 21:56:37.627219 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.627226 | orchestrator | 2025-09-27 21:56:37.627233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.627240 | orchestrator | Saturday 27 September 2025 21:56:37 +0000 (0:00:00.184) 0:00:34.937 **** 2025-09-27 21:56:37.627247 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.627254 | orchestrator | 2025-09-27 21:56:37.627261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:56:37.627268 | orchestrator | Saturday 27 September 2025 21:56:37 +0000 (0:00:00.183) 0:00:35.120 **** 2025-09-27 21:56:37.627275 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:37.627282 | orchestrator | 2025-09-27 21:56:37.627289 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-27 21:56:37.627301 | orchestrator | Saturday 27 September 2025 21:56:37 +0000 (0:00:00.168) 0:00:35.288 **** 2025-09-27 21:56:41.270286 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None}) 2025-09-27 21:56:41.270391 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-27 21:56:41.270407 | orchestrator | 2025-09-27 21:56:41.270420 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-27 21:56:41.270431 | orchestrator | Saturday 27 September 2025 21:56:37 +0000 (0:00:00.153) 0:00:35.442 **** 2025-09-27 21:56:41.270442 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:41.270454 | orchestrator | 2025-09-27 21:56:41.270465 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-27 21:56:41.270476 | orchestrator | Saturday 27 September 2025 21:56:37 +0000 (0:00:00.126) 0:00:35.568 **** 2025-09-27 21:56:41.270486 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:41.270497 | orchestrator | 2025-09-27 21:56:41.270508 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-27 21:56:41.270518 | orchestrator | Saturday 27 September 2025 21:56:38 +0000 (0:00:00.116) 0:00:35.684 **** 2025-09-27 21:56:41.270529 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:41.270540 | orchestrator | 2025-09-27 21:56:41.270550 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-27 21:56:41.270561 | orchestrator | Saturday 27 September 2025 21:56:38 +0000 (0:00:00.129) 0:00:35.814 **** 2025-09-27 21:56:41.270572 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:56:41.270583 | orchestrator | 2025-09-27 21:56:41.270594 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-27 21:56:41.270605 | orchestrator | Saturday 27 September 2025 21:56:38 +0000 (0:00:00.269) 0:00:36.084 **** 2025-09-27 21:56:41.270617 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2625e84f-b704-594b-a79a-2de5db7d7d7c'}}) 
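The osd_lvm_uuid values that show up in this run (e.g. 2625e84f-b704-594b-…) all carry a version-5 nibble in the third group, which suggests they are name-based (deterministic) UUIDs rather than random ones, so re-running the play keeps device identities stable. A minimal sketch of that idea, assuming Python's uuid.uuid5 over a hypothetical hostname/device name; the namespace and name string actually used by the playbook are not visible in this log:

```python
import uuid

# Hypothetical namespace -- the real playbook's choice is not shown in
# this log; this only illustrates that uuid5() yields stable, repeatable
# UUIDs for the same (namespace, name) pair.
NAMESPACE = uuid.NAMESPACE_DNS  # assumption

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a deterministic UUID for an OSD device (illustrative only)."""
    return str(uuid.uuid5(NAMESPACE, f"{hostname}-{device}"))

# Repeated calls with the same inputs return the same UUID, so a re-run
# of the play would not re-generate device identities.
assert osd_lvm_uuid("testbed-node-5", "sdb") == osd_lvm_uuid("testbed-node-5", "sdb")
```

The point of the deterministic derivation is idempotency: the "Set UUIDs for OSD VGs/LVs" task can run on every execution without churning VG/LV names.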
2025-09-27 21:56:41.270629 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30a62591-9a6e-5933-8bc7-7c2bee7235f5'}}) 2025-09-27 21:56:41.270639 | orchestrator | 2025-09-27 21:56:41.270650 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-27 21:56:41.270661 | orchestrator | Saturday 27 September 2025 21:56:38 +0000 (0:00:00.151) 0:00:36.235 **** 2025-09-27 21:56:41.270672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2625e84f-b704-594b-a79a-2de5db7d7d7c'}})  2025-09-27 21:56:41.270685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30a62591-9a6e-5933-8bc7-7c2bee7235f5'}})  2025-09-27 21:56:41.270696 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:41.270707 | orchestrator | 2025-09-27 21:56:41.270717 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-27 21:56:41.270728 | orchestrator | Saturday 27 September 2025 21:56:38 +0000 (0:00:00.120) 0:00:36.356 **** 2025-09-27 21:56:41.270739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2625e84f-b704-594b-a79a-2de5db7d7d7c'}})  2025-09-27 21:56:41.270778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30a62591-9a6e-5933-8bc7-7c2bee7235f5'}})  2025-09-27 21:56:41.270790 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:41.270815 | orchestrator | 2025-09-27 21:56:41.270827 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-27 21:56:41.270839 | orchestrator | Saturday 27 September 2025 21:56:38 +0000 (0:00:00.139) 0:00:36.495 **** 2025-09-27 21:56:41.270876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2625e84f-b704-594b-a79a-2de5db7d7d7c'}})  2025-09-27 
21:56:41.270888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30a62591-9a6e-5933-8bc7-7c2bee7235f5'}})  2025-09-27 21:56:41.270901 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:41.270912 | orchestrator | 2025-09-27 21:56:41.270925 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-27 21:56:41.270937 | orchestrator | Saturday 27 September 2025 21:56:38 +0000 (0:00:00.147) 0:00:36.643 **** 2025-09-27 21:56:41.270949 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:56:41.270961 | orchestrator | 2025-09-27 21:56:41.270990 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-27 21:56:41.271003 | orchestrator | Saturday 27 September 2025 21:56:39 +0000 (0:00:00.129) 0:00:36.772 **** 2025-09-27 21:56:41.271015 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:56:41.271027 | orchestrator | 2025-09-27 21:56:41.271039 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-27 21:56:41.271051 | orchestrator | Saturday 27 September 2025 21:56:39 +0000 (0:00:00.131) 0:00:36.903 **** 2025-09-27 21:56:41.271063 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:41.271075 | orchestrator | 2025-09-27 21:56:41.271088 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-27 21:56:41.271100 | orchestrator | Saturday 27 September 2025 21:56:39 +0000 (0:00:00.116) 0:00:37.020 **** 2025-09-27 21:56:41.271112 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:56:41.271123 | orchestrator | 2025-09-27 21:56:41.271135 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-27 21:56:41.271147 | orchestrator | Saturday 27 September 2025 21:56:39 +0000 (0:00:00.119) 0:00:37.139 **** 2025-09-27 21:56:41.271160 | orchestrator | skipping: [testbed-node-5] 
2025-09-27 21:56:41.271172 | orchestrator | 
2025-09-27 21:56:41.271183 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-27 21:56:41.271194 | orchestrator | Saturday 27 September 2025 21:56:39 +0000 (0:00:00.122) 0:00:37.262 ****
2025-09-27 21:56:41.271205 | orchestrator | ok: [testbed-node-5] => {
2025-09-27 21:56:41.271216 | orchestrator |     "ceph_osd_devices": {
2025-09-27 21:56:41.271227 | orchestrator |         "sdb": {
2025-09-27 21:56:41.271238 | orchestrator |             "osd_lvm_uuid": "2625e84f-b704-594b-a79a-2de5db7d7d7c"
2025-09-27 21:56:41.271266 | orchestrator |         },
2025-09-27 21:56:41.271278 | orchestrator |         "sdc": {
2025-09-27 21:56:41.271289 | orchestrator |             "osd_lvm_uuid": "30a62591-9a6e-5933-8bc7-7c2bee7235f5"
2025-09-27 21:56:41.271300 | orchestrator |         }
2025-09-27 21:56:41.271311 | orchestrator |     }
2025-09-27 21:56:41.271322 | orchestrator | }
2025-09-27 21:56:41.271334 | orchestrator | 
2025-09-27 21:56:41.271345 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-27 21:56:41.271355 | orchestrator | Saturday 27 September 2025 21:56:39 +0000 (0:00:00.120) 0:00:37.394 ****
2025-09-27 21:56:41.271366 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:56:41.271377 | orchestrator | 
2025-09-27 21:56:41.271489 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-27 21:56:41.271504 | orchestrator | Saturday 27 September 2025 21:56:39 +0000 (0:00:00.120) 0:00:37.514 ****
2025-09-27 21:56:41.271514 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:56:41.271525 | orchestrator | 
2025-09-27 21:56:41.271536 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-27 21:56:41.271631 | orchestrator | Saturday 27 September 2025 21:56:40 +0000 (0:00:00.264) 0:00:37.778 ****
2025-09-27 21:56:41.271643 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:56:41.271655 | orchestrator | 
2025-09-27 21:56:41.271666 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-27 21:56:41.271676 | orchestrator | Saturday 27 September 2025 21:56:40 +0000 (0:00:00.127) 0:00:37.906 ****
2025-09-27 21:56:41.271687 | orchestrator | changed: [testbed-node-5] => {
2025-09-27 21:56:41.271698 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-27 21:56:41.271709 | orchestrator |         "ceph_osd_devices": {
2025-09-27 21:56:41.271720 | orchestrator |             "sdb": {
2025-09-27 21:56:41.271731 | orchestrator |                 "osd_lvm_uuid": "2625e84f-b704-594b-a79a-2de5db7d7d7c"
2025-09-27 21:56:41.271742 | orchestrator |             },
2025-09-27 21:56:41.271753 | orchestrator |             "sdc": {
2025-09-27 21:56:41.271763 | orchestrator |                 "osd_lvm_uuid": "30a62591-9a6e-5933-8bc7-7c2bee7235f5"
2025-09-27 21:56:41.271774 | orchestrator |             }
2025-09-27 21:56:41.271785 | orchestrator |         },
2025-09-27 21:56:41.271796 | orchestrator |         "lvm_volumes": [
2025-09-27 21:56:41.271806 | orchestrator |             {
2025-09-27 21:56:41.271817 | orchestrator |                 "data": "osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c",
2025-09-27 21:56:41.271828 | orchestrator |                 "data_vg": "ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c"
2025-09-27 21:56:41.271838 | orchestrator |             },
2025-09-27 21:56:41.271875 | orchestrator |             {
2025-09-27 21:56:41.271887 | orchestrator |                 "data": "osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5",
2025-09-27 21:56:41.271898 | orchestrator |                 "data_vg": "ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5"
2025-09-27 21:56:41.271909 | orchestrator |             }
2025-09-27 21:56:41.271920 | orchestrator |         ]
2025-09-27 21:56:41.271931 | orchestrator |     }
2025-09-27 21:56:41.271946 | orchestrator | }
2025-09-27 21:56:41.271957 | orchestrator | 
2025-09-27 21:56:41.271968 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-27 21:56:41.271979 | orchestrator | Saturday 27 September 2025 21:56:40 +0000 (0:00:00.191) 0:00:38.097 ****
2025-09-27 21:56:41.271990 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-27 21:56:41.272001 | orchestrator | 
2025-09-27 21:56:41.272012 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:56:41.272023 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2025-09-27 21:56:41.272034 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2025-09-27 21:56:41.272045 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2025-09-27 21:56:41.272056 | orchestrator | 
2025-09-27 21:56:41.272067 | orchestrator | 
2025-09-27 21:56:41.272078 | orchestrator | 
2025-09-27 21:56:41.272088 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:56:41.272099 | orchestrator | Saturday 27 September 2025 21:56:41 +0000 (0:00:00.825) 0:00:38.923 ****
2025-09-27 21:56:41.272110 | orchestrator | ===============================================================================
2025-09-27 21:56:41.272121 | orchestrator | Write configuration file ------------------------------------------------ 3.50s
2025-09-27 21:56:41.272132 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s
2025-09-27 21:56:41.272142 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s
2025-09-27 21:56:41.272153 | orchestrator | Get initial list of available block devices ----------------------------- 1.08s
2025-09-27 21:56:41.272164 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s
2025-09-27 21:56:41.272182 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.95s
2025-09-27 21:56:41.272193 | orchestrator | Print configuration data ------------------------------------------------ 0.82s
2025-09-27 21:56:41.272204 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2025-09-27 21:56:41.272215 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2025-09-27 21:56:41.272225 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-09-27 21:56:41.272236 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-09-27 21:56:41.272246 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.63s
2025-09-27 21:56:41.272257 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.62s
2025-09-27 21:56:41.272268 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-09-27 21:56:41.272288 | orchestrator | Print shared DB/WAL devices --------------------------------------------- 0.60s
2025-09-27 21:56:41.462898 | orchestrator | Set WAL devices config data --------------------------------------------- 0.58s
2025-09-27 21:56:41.462992 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2025-09-27 21:56:41.463004 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.53s
2025-09-27 21:56:41.463015 | orchestrator | Print DB devices -------------------------------------------------------- 0.52s
2025-09-27 21:56:41.463027 | orchestrator | Add known partitions to the list of available block devices ------------- 0.51s
2025-09-27 21:57:03.818707 | orchestrator | 2025-09-27 21:57:03 | INFO  | Task 41d9d2e8-edc5-417a-9953-64996df44699 (sync inventory) is running in background. Output coming soon.
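As the "Print configuration data" output shows, each ceph_osd_devices entry becomes a block-only lvm_volumes item whose data LV is named osd-block-&lt;uuid&gt; and whose data VG is named ceph-&lt;uuid&gt;. A minimal sketch of that transformation (the helper function name is ours; the playbook itself builds the same structure with set_fact templating):

```python
def lvm_volumes_block_only(ceph_osd_devices: dict) -> list:
    """Build the block-only lvm_volumes list from ceph_osd_devices,
    mirroring the osd-block-<uuid> / ceph-<uuid> naming seen in the log."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# Input taken verbatim from the "Print ceph_osd_devices" output above.
devices = {
    "sdb": {"osd_lvm_uuid": "2625e84f-b704-594b-a79a-2de5db7d7d7c"},
    "sdc": {"osd_lvm_uuid": "30a62591-9a6e-5933-8bc7-7c2bee7235f5"},
}
volumes = lvm_volumes_block_only(devices)
```

With DB and/or WAL devices configured, the skipped "block + db" and "block + wal" variants would extend each item with additional keys instead; only the block-only path ran here.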
2025-09-27 21:57:27.813220 | orchestrator | 2025-09-27 21:57:05 | INFO  | Starting group_vars file reorganization
2025-09-27 21:57:27.813363 | orchestrator | 2025-09-27 21:57:05 | INFO  | Moved 0 file(s) to their respective directories
2025-09-27 21:57:27.813381 | orchestrator | 2025-09-27 21:57:05 | INFO  | Group_vars file reorganization completed
2025-09-27 21:57:27.813393 | orchestrator | 2025-09-27 21:57:07 | INFO  | Starting variable preparation from inventory
2025-09-27 21:57:27.813404 | orchestrator | 2025-09-27 21:57:10 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-27 21:57:27.813416 | orchestrator | 2025-09-27 21:57:10 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-27 21:57:27.814184 | orchestrator | 2025-09-27 21:57:10 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-27 21:57:27.814204 | orchestrator | 2025-09-27 21:57:10 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-27 21:57:27.814217 | orchestrator | 2025-09-27 21:57:10 | INFO  | Variable preparation completed
2025-09-27 21:57:27.814231 | orchestrator | 2025-09-27 21:57:11 | INFO  | Starting inventory overwrite handling
2025-09-27 21:57:27.814242 | orchestrator | 2025-09-27 21:57:11 | INFO  | Handling group overwrites in 99-overwrite
2025-09-27 21:57:27.814278 | orchestrator | 2025-09-27 21:57:11 | INFO  | Removing group frr:children from 60-generic
2025-09-27 21:57:27.814290 | orchestrator | 2025-09-27 21:57:11 | INFO  | Removing group storage:children from 50-kolla
2025-09-27 21:57:27.814301 | orchestrator | 2025-09-27 21:57:11 | INFO  | Removing group netbird:children from 50-infrastructure
2025-09-27 21:57:27.814312 | orchestrator | 2025-09-27 21:57:11 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-27 21:57:27.814324 | orchestrator | 2025-09-27 21:57:11 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-27 21:57:27.814334 | orchestrator | 2025-09-27 21:57:11 | INFO  | Handling group overwrites in 20-roles
2025-09-27 21:57:27.814345 | orchestrator | 2025-09-27 21:57:11 | INFO  | Removing group k3s_node from 50-infrastructure
2025-09-27 21:57:27.814382 | orchestrator | 2025-09-27 21:57:11 | INFO  | Removed 6 group(s) in total
2025-09-27 21:57:27.814393 | orchestrator | 2025-09-27 21:57:11 | INFO  | Inventory overwrite handling completed
2025-09-27 21:57:27.814404 | orchestrator | 2025-09-27 21:57:12 | INFO  | Starting merge of inventory files
2025-09-27 21:57:27.814415 | orchestrator | 2025-09-27 21:57:12 | INFO  | Inventory files merged successfully
2025-09-27 21:57:27.814425 | orchestrator | 2025-09-27 21:57:16 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-27 21:57:27.814436 | orchestrator | 2025-09-27 21:57:26 | INFO  | Successfully wrote ClusterShell configuration
2025-09-27 21:57:27.814447 | orchestrator | [master f5e7a6d] 2025-09-27-21-57
2025-09-27 21:57:27.814459 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-27 21:57:30.029744 | orchestrator | 2025-09-27 21:57:30 | INFO  | Task 2f62f3d6-fc06-4086-8124-1f596632b1f3 (ceph-create-lvm-devices) was prepared for execution.
2025-09-27 21:57:30.029851 | orchestrator | 2025-09-27 21:57:30 | INFO  | It takes a moment until task 2f62f3d6-fc06-4086-8124-1f596632b1f3 (ceph-create-lvm-devices) has been started and output is visible here.
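The "Writing 050-*.yml" messages record small single-variable group_vars files being dropped next to the inventory. A minimal sketch of that pattern, assuming a flat key/value file such as 050-ceph-cluster-fsid.yml; the actual sync-inventory implementation and the sources of the values are not visible in this log:

```python
import tempfile
from pathlib import Path

def write_group_var(directory: Path, filename: str, key: str, value: str) -> Path:
    """Render a single-variable YAML file such as 050-ceph-cluster-fsid.yml.
    The YAML is hand-formatted to stay dependency-free (illustrative only)."""
    path = directory / filename
    path.write_text(f"---\n{key}: {value}\n")
    return path

# Demo with a made-up fsid; the real task derives the value from the inventory.
demo_dir = Path(tempfile.mkdtemp())
fsid_file = write_group_var(demo_dir, "050-ceph-cluster-fsid.yml",
                            "ceph_cluster_fsid",
                            "00000000-0000-0000-0000-000000000000")
content = fsid_file.read_text()
```

The numeric prefix (050-) keeps these generated files ordered relative to the hand-maintained layers (20-roles, 60-generic, 99-overwrite) mentioned in the overwrite-handling messages.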
2025-09-27 21:57:43.301150 | orchestrator | 2025-09-27 21:57:43.301227 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-27 21:57:43.301235 | orchestrator | 2025-09-27 21:57:43.301241 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-27 21:57:43.301247 | orchestrator | Saturday 27 September 2025 21:57:35 +0000 (0:00:00.348) 0:00:00.348 **** 2025-09-27 21:57:43.301252 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-27 21:57:43.301258 | orchestrator | 2025-09-27 21:57:43.301263 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-27 21:57:43.301268 | orchestrator | Saturday 27 September 2025 21:57:35 +0000 (0:00:00.274) 0:00:00.622 **** 2025-09-27 21:57:43.301273 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:57:43.301279 | orchestrator | 2025-09-27 21:57:43.301283 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301295 | orchestrator | Saturday 27 September 2025 21:57:35 +0000 (0:00:00.214) 0:00:00.836 **** 2025-09-27 21:57:43.301301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-27 21:57:43.301307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-27 21:57:43.301312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-27 21:57:43.301317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-27 21:57:43.301322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-27 21:57:43.301327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-27 21:57:43.301331 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-27 21:57:43.301336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-27 21:57:43.301341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-27 21:57:43.301346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-27 21:57:43.301351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-27 21:57:43.301355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-27 21:57:43.301360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-27 21:57:43.301365 | orchestrator | 2025-09-27 21:57:43.301370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301392 | orchestrator | Saturday 27 September 2025 21:57:36 +0000 (0:00:00.414) 0:00:01.251 **** 2025-09-27 21:57:43.301397 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.301402 | orchestrator | 2025-09-27 21:57:43.301406 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301411 | orchestrator | Saturday 27 September 2025 21:57:36 +0000 (0:00:00.432) 0:00:01.683 **** 2025-09-27 21:57:43.301416 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.301421 | orchestrator | 2025-09-27 21:57:43.301425 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301457 | orchestrator | Saturday 27 September 2025 21:57:36 +0000 (0:00:00.196) 0:00:01.880 **** 2025-09-27 21:57:43.301464 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.301468 | orchestrator | 2025-09-27 21:57:43.301473 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-09-27 21:57:43.301478 | orchestrator | Saturday 27 September 2025 21:57:36 +0000 (0:00:00.190) 0:00:02.071 **** 2025-09-27 21:57:43.301483 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.301488 | orchestrator | 2025-09-27 21:57:43.301493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301497 | orchestrator | Saturday 27 September 2025 21:57:37 +0000 (0:00:00.185) 0:00:02.256 **** 2025-09-27 21:57:43.301502 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.301507 | orchestrator | 2025-09-27 21:57:43.301511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301516 | orchestrator | Saturday 27 September 2025 21:57:37 +0000 (0:00:00.207) 0:00:02.464 **** 2025-09-27 21:57:43.301556 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.301561 | orchestrator | 2025-09-27 21:57:43.301566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301571 | orchestrator | Saturday 27 September 2025 21:57:37 +0000 (0:00:00.205) 0:00:02.669 **** 2025-09-27 21:57:43.301576 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.301580 | orchestrator | 2025-09-27 21:57:43.301585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301590 | orchestrator | Saturday 27 September 2025 21:57:37 +0000 (0:00:00.217) 0:00:02.887 **** 2025-09-27 21:57:43.301595 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.301599 | orchestrator | 2025-09-27 21:57:43.301604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301609 | orchestrator | Saturday 27 September 2025 21:57:37 +0000 (0:00:00.226) 0:00:03.113 **** 2025-09-27 21:57:43.301614 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124) 2025-09-27 21:57:43.301620 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124) 2025-09-27 21:57:43.301624 | orchestrator | 2025-09-27 21:57:43.301629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301634 | orchestrator | Saturday 27 September 2025 21:57:38 +0000 (0:00:00.545) 0:00:03.659 **** 2025-09-27 21:57:43.301666 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129) 2025-09-27 21:57:43.301673 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129) 2025-09-27 21:57:43.301678 | orchestrator | 2025-09-27 21:57:43.301682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301687 | orchestrator | Saturday 27 September 2025 21:57:38 +0000 (0:00:00.417) 0:00:04.076 **** 2025-09-27 21:57:43.301692 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d) 2025-09-27 21:57:43.301697 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d) 2025-09-27 21:57:43.301702 | orchestrator | 2025-09-27 21:57:43.301708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301718 | orchestrator | Saturday 27 September 2025 21:57:39 +0000 (0:00:00.617) 0:00:04.693 **** 2025-09-27 21:57:43.301723 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9) 2025-09-27 21:57:43.301729 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9) 2025-09-27 21:57:43.301734 | orchestrator | 2025-09-27 21:57:43.301740 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:57:43.301745 | orchestrator | Saturday 27 September 2025 21:57:40 +0000 (0:00:00.637) 0:00:05.331 **** 2025-09-27 21:57:43.301750 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-27 21:57:43.301756 | orchestrator | 2025-09-27 21:57:43.301796 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:43.301851 | orchestrator | Saturday 27 September 2025 21:57:41 +0000 (0:00:00.934) 0:00:06.266 **** 2025-09-27 21:57:43.301857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-27 21:57:43.301884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-27 21:57:43.301891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-27 21:57:43.301897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-27 21:57:43.301902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-27 21:57:43.301961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-27 21:57:43.301967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-27 21:57:43.301973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-27 21:57:43.301992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-27 21:57:43.301998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-27 21:57:43.302003 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-27 21:57:43.302008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-27 21:57:43.302067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-27 21:57:43.302073 | orchestrator | 2025-09-27 21:57:43.302079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:43.302084 | orchestrator | Saturday 27 September 2025 21:57:41 +0000 (0:00:00.434) 0:00:06.700 **** 2025-09-27 21:57:43.302090 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.302095 | orchestrator | 2025-09-27 21:57:43.302100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:43.302105 | orchestrator | Saturday 27 September 2025 21:57:41 +0000 (0:00:00.228) 0:00:06.929 **** 2025-09-27 21:57:43.302110 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.302114 | orchestrator | 2025-09-27 21:57:43.302119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:43.302124 | orchestrator | Saturday 27 September 2025 21:57:41 +0000 (0:00:00.199) 0:00:07.128 **** 2025-09-27 21:57:43.302129 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.302134 | orchestrator | 2025-09-27 21:57:43.302138 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:43.302143 | orchestrator | Saturday 27 September 2025 21:57:42 +0000 (0:00:00.266) 0:00:07.395 **** 2025-09-27 21:57:43.302148 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.302153 | orchestrator | 2025-09-27 21:57:43.302158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:43.302168 | orchestrator | Saturday 27 September 
2025 21:57:42 +0000 (0:00:00.218) 0:00:07.613 **** 2025-09-27 21:57:43.302172 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.302177 | orchestrator | 2025-09-27 21:57:43.302182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:43.302187 | orchestrator | Saturday 27 September 2025 21:57:42 +0000 (0:00:00.270) 0:00:07.883 **** 2025-09-27 21:57:43.302192 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.302196 | orchestrator | 2025-09-27 21:57:43.302201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:43.302206 | orchestrator | Saturday 27 September 2025 21:57:42 +0000 (0:00:00.216) 0:00:08.099 **** 2025-09-27 21:57:43.302210 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:43.302215 | orchestrator | 2025-09-27 21:57:43.302220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:43.302225 | orchestrator | Saturday 27 September 2025 21:57:43 +0000 (0:00:00.219) 0:00:08.319 **** 2025-09-27 21:57:43.302234 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:50.953214 | orchestrator | 2025-09-27 21:57:50.953303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:50.953312 | orchestrator | Saturday 27 September 2025 21:57:43 +0000 (0:00:00.221) 0:00:08.541 **** 2025-09-27 21:57:50.953318 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-27 21:57:50.953325 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-27 21:57:50.953330 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-27 21:57:50.953335 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-27 21:57:50.953340 | orchestrator | 2025-09-27 21:57:50.953345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:57:50.953349 | 
orchestrator | Saturday 27 September 2025 21:57:44 +0000 (0:00:01.186) 0:00:09.728 ****
2025-09-27 21:57:50.953354 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953359 | orchestrator | 
2025-09-27 21:57:50.953364 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:57:50.953368 | orchestrator | Saturday 27 September 2025 21:57:44 +0000 (0:00:00.194) 0:00:09.922 ****
2025-09-27 21:57:50.953373 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953377 | orchestrator | 
2025-09-27 21:57:50.953382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:57:50.953386 | orchestrator | Saturday 27 September 2025 21:57:44 +0000 (0:00:00.194) 0:00:10.116 ****
2025-09-27 21:57:50.953391 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953395 | orchestrator | 
2025-09-27 21:57:50.953400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:57:50.953405 | orchestrator | Saturday 27 September 2025 21:57:45 +0000 (0:00:00.171) 0:00:10.288 ****
2025-09-27 21:57:50.953409 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953413 | orchestrator | 
2025-09-27 21:57:50.953418 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-27 21:57:50.953422 | orchestrator | Saturday 27 September 2025 21:57:45 +0000 (0:00:00.171) 0:00:10.460 ****
2025-09-27 21:57:50.953427 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953431 | orchestrator | 
2025-09-27 21:57:50.953436 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-27 21:57:50.953440 | orchestrator | Saturday 27 September 2025 21:57:45 +0000 (0:00:00.107) 0:00:10.567 ****
2025-09-27 21:57:50.953446 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'}})
2025-09-27 21:57:50.953451 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d8c80c3-887a-53bd-bc85-16ee8bc68188'}})
2025-09-27 21:57:50.953456 | orchestrator | 
2025-09-27 21:57:50.953460 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-27 21:57:50.953465 | orchestrator | Saturday 27 September 2025 21:57:45 +0000 (0:00:00.164) 0:00:10.731 ****
2025-09-27 21:57:50.953487 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:57:50.953492 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:57:50.953496 | orchestrator | 
2025-09-27 21:57:50.953501 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-27 21:57:50.953517 | orchestrator | Saturday 27 September 2025 21:57:47 +0000 (0:00:01.888) 0:00:12.620 ****
2025-09-27 21:57:50.953522 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:57:50.953528 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:57:50.953532 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953537 | orchestrator | 
2025-09-27 21:57:50.953541 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-27 21:57:50.953546 | orchestrator | Saturday 27 September 2025 21:57:47 +0000 (0:00:00.124) 0:00:12.744 ****
2025-09-27 21:57:50.953550 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'}) 2025-09-27 21:57:50.953555 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'}) 2025-09-27 21:57:50.953559 | orchestrator | 2025-09-27 21:57:50.953564 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-27 21:57:50.953568 | orchestrator | Saturday 27 September 2025 21:57:48 +0000 (0:00:01.408) 0:00:14.153 **** 2025-09-27 21:57:50.953573 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})  2025-09-27 21:57:50.953578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})  2025-09-27 21:57:50.953582 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:50.953587 | orchestrator | 2025-09-27 21:57:50.953591 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-27 21:57:50.953596 | orchestrator | Saturday 27 September 2025 21:57:49 +0000 (0:00:00.159) 0:00:14.312 **** 2025-09-27 21:57:50.953600 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:50.953605 | orchestrator | 2025-09-27 21:57:50.953609 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-27 21:57:50.953626 | orchestrator | Saturday 27 September 2025 21:57:49 +0000 (0:00:00.129) 0:00:14.442 **** 2025-09-27 21:57:50.953630 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})  2025-09-27 21:57:50.953635 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})  2025-09-27 21:57:50.953640 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:50.953644 | orchestrator | 2025-09-27 21:57:50.953649 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-27 21:57:50.953653 | orchestrator | Saturday 27 September 2025 21:57:49 +0000 (0:00:00.309) 0:00:14.751 **** 2025-09-27 21:57:50.953658 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:50.953662 | orchestrator | 2025-09-27 21:57:50.953667 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-27 21:57:50.953671 | orchestrator | Saturday 27 September 2025 21:57:49 +0000 (0:00:00.140) 0:00:14.892 **** 2025-09-27 21:57:50.953676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})  2025-09-27 21:57:50.953685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})  2025-09-27 21:57:50.953689 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:50.953694 | orchestrator | 2025-09-27 21:57:50.953699 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-27 21:57:50.953703 | orchestrator | Saturday 27 September 2025 21:57:49 +0000 (0:00:00.150) 0:00:15.043 **** 2025-09-27 21:57:50.953707 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:50.953712 | orchestrator | 2025-09-27 21:57:50.953717 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-27 21:57:50.953721 | orchestrator | Saturday 27 September 2025 21:57:49 +0000 (0:00:00.128) 0:00:15.172 **** 2025-09-27 21:57:50.953726 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})  2025-09-27 21:57:50.953730 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})  2025-09-27 21:57:50.953735 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:50.953739 | orchestrator | 2025-09-27 21:57:50.953744 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-27 21:57:50.953748 | orchestrator | Saturday 27 September 2025 21:57:50 +0000 (0:00:00.137) 0:00:15.309 **** 2025-09-27 21:57:50.953753 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:57:50.953758 | orchestrator | 2025-09-27 21:57:50.953762 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-27 21:57:50.953768 | orchestrator | Saturday 27 September 2025 21:57:50 +0000 (0:00:00.136) 0:00:15.446 **** 2025-09-27 21:57:50.953774 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})  2025-09-27 21:57:50.953779 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})  2025-09-27 21:57:50.953784 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:57:50.953790 | orchestrator | 2025-09-27 21:57:50.953795 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-27 21:57:50.953806 | orchestrator | Saturday 27 September 2025 21:57:50 +0000 (0:00:00.139) 0:00:15.585 **** 2025-09-27 21:57:50.953811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})  
2025-09-27 21:57:50.953816 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:57:50.953821 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953827 | orchestrator |
2025-09-27 21:57:50.953832 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-27 21:57:50.953837 | orchestrator | Saturday 27 September 2025 21:57:50 +0000 (0:00:00.187) 0:00:15.772 ****
2025-09-27 21:57:50.953842 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:57:50.953847 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:57:50.953852 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953857 | orchestrator |
2025-09-27 21:57:50.953862 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-27 21:57:50.953867 | orchestrator | Saturday 27 September 2025 21:57:50 +0000 (0:00:00.151) 0:00:15.924 ****
2025-09-27 21:57:50.953872 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953883 | orchestrator |
2025-09-27 21:57:50.953888 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-27 21:57:50.953893 | orchestrator | Saturday 27 September 2025 21:57:50 +0000 (0:00:00.133) 0:00:16.057 ****
2025-09-27 21:57:50.953898 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:50.953903 | orchestrator |
2025-09-27 21:57:50.953911 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-27 21:57:56.956743 | orchestrator | Saturday 27 September 2025 21:57:50 +0000 (0:00:00.134) 0:00:16.192 ****
2025-09-27 21:57:56.956844 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.956857 | orchestrator |
2025-09-27 21:57:56.956867 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-27 21:57:56.956876 | orchestrator | Saturday 27 September 2025 21:57:51 +0000 (0:00:00.135) 0:00:16.327 ****
2025-09-27 21:57:56.956885 | orchestrator | ok: [testbed-node-3] => {
2025-09-27 21:57:56.956895 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-27 21:57:56.956905 | orchestrator | }
2025-09-27 21:57:56.956914 | orchestrator |
2025-09-27 21:57:56.956923 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-27 21:57:56.956932 | orchestrator | Saturday 27 September 2025 21:57:51 +0000 (0:00:00.311) 0:00:16.639 ****
2025-09-27 21:57:56.956940 | orchestrator | ok: [testbed-node-3] => {
2025-09-27 21:57:56.956949 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-27 21:57:56.956958 | orchestrator | }
2025-09-27 21:57:56.957009 | orchestrator |
2025-09-27 21:57:56.957018 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-27 21:57:56.957027 | orchestrator | Saturday 27 September 2025 21:57:51 +0000 (0:00:00.151) 0:00:16.791 ****
2025-09-27 21:57:56.957036 | orchestrator | ok: [testbed-node-3] => {
2025-09-27 21:57:56.957045 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-27 21:57:56.957054 | orchestrator | }
2025-09-27 21:57:56.957064 | orchestrator |
2025-09-27 21:57:56.957073 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-27 21:57:56.957082 | orchestrator | Saturday 27 September 2025 21:57:51 +0000 (0:00:00.163) 0:00:16.955 ****
2025-09-27 21:57:56.957091 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:57:56.957101 | orchestrator |
2025-09-27 21:57:56.957109 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-27 21:57:56.957118 | orchestrator | Saturday 27 September 2025 21:57:52 +0000 (0:00:00.663) 0:00:17.618 ****
2025-09-27 21:57:56.957127 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:57:56.957135 | orchestrator |
2025-09-27 21:57:56.957144 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-27 21:57:56.957152 | orchestrator | Saturday 27 September 2025 21:57:52 +0000 (0:00:00.528) 0:00:18.147 ****
2025-09-27 21:57:56.957161 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:57:56.957170 | orchestrator |
2025-09-27 21:57:56.957179 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-27 21:57:56.957188 | orchestrator | Saturday 27 September 2025 21:57:53 +0000 (0:00:00.478) 0:00:18.626 ****
2025-09-27 21:57:56.957196 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:57:56.957205 | orchestrator |
2025-09-27 21:57:56.957214 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-27 21:57:56.957222 | orchestrator | Saturday 27 September 2025 21:57:53 +0000 (0:00:00.157) 0:00:18.783 ****
2025-09-27 21:57:56.957231 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957240 | orchestrator |
2025-09-27 21:57:56.957248 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-27 21:57:56.957257 | orchestrator | Saturday 27 September 2025 21:57:53 +0000 (0:00:00.160) 0:00:18.943 ****
2025-09-27 21:57:56.957266 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957276 | orchestrator |
2025-09-27 21:57:56.957285 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-27 21:57:56.957295 | orchestrator | Saturday 27 September 2025 21:57:53 +0000 (0:00:00.107) 0:00:19.051 ****
2025-09-27 21:57:56.957327 | orchestrator | ok: [testbed-node-3] => {
2025-09-27 21:57:56.957337 | orchestrator |     "vgs_report": {
2025-09-27 21:57:56.957363 | orchestrator |         "vg": []
2025-09-27 21:57:56.957373 | orchestrator |     }
2025-09-27 21:57:56.957383 | orchestrator | }
2025-09-27 21:57:56.957393 | orchestrator |
2025-09-27 21:57:56.957403 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-27 21:57:56.957412 | orchestrator | Saturday 27 September 2025 21:57:53 +0000 (0:00:00.124) 0:00:19.175 ****
2025-09-27 21:57:56.957422 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957432 | orchestrator |
2025-09-27 21:57:56.957441 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-27 21:57:56.957451 | orchestrator | Saturday 27 September 2025 21:57:54 +0000 (0:00:00.131) 0:00:19.306 ****
2025-09-27 21:57:56.957461 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957470 | orchestrator |
2025-09-27 21:57:56.957480 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-27 21:57:56.957490 | orchestrator | Saturday 27 September 2025 21:57:54 +0000 (0:00:00.123) 0:00:19.430 ****
2025-09-27 21:57:56.957499 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957509 | orchestrator |
2025-09-27 21:57:56.957518 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-27 21:57:56.957528 | orchestrator | Saturday 27 September 2025 21:57:54 +0000 (0:00:00.253) 0:00:19.684 ****
2025-09-27 21:57:56.957538 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957547 | orchestrator |
2025-09-27 21:57:56.957556 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-27 21:57:56.957566 | orchestrator | Saturday 27 September 2025 21:57:54 +0000 (0:00:00.118) 0:00:19.803 ****
2025-09-27 21:57:56.957575 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957585 | orchestrator |
2025-09-27 21:57:56.957595 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-27 21:57:56.957605 | orchestrator | Saturday 27 September 2025 21:57:54 +0000 (0:00:00.131) 0:00:19.935 ****
2025-09-27 21:57:56.957614 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957624 | orchestrator |
2025-09-27 21:57:56.957634 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-27 21:57:56.957643 | orchestrator | Saturday 27 September 2025 21:57:54 +0000 (0:00:00.121) 0:00:20.057 ****
2025-09-27 21:57:56.957653 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957662 | orchestrator |
2025-09-27 21:57:56.957671 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-27 21:57:56.957680 | orchestrator | Saturday 27 September 2025 21:57:54 +0000 (0:00:00.132) 0:00:20.189 ****
2025-09-27 21:57:56.957688 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957697 | orchestrator |
2025-09-27 21:57:56.957706 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-27 21:57:56.957730 | orchestrator | Saturday 27 September 2025 21:57:55 +0000 (0:00:00.123) 0:00:20.312 ****
2025-09-27 21:57:56.957739 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957748 | orchestrator |
2025-09-27 21:57:56.957757 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-27 21:57:56.957765 | orchestrator | Saturday 27 September 2025 21:57:55 +0000 (0:00:00.120) 0:00:20.432 ****
2025-09-27 21:57:56.957773 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957782 | orchestrator |
2025-09-27 21:57:56.957790 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-27 21:57:56.957799 | orchestrator | Saturday 27 September 2025 21:57:55 +0000 (0:00:00.116) 0:00:20.549 ****
2025-09-27 21:57:56.957807 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957816 | orchestrator |
2025-09-27 21:57:56.957824 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-27 21:57:56.957833 | orchestrator | Saturday 27 September 2025 21:57:55 +0000 (0:00:00.148) 0:00:20.697 ****
2025-09-27 21:57:56.957841 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957850 | orchestrator |
2025-09-27 21:57:56.957888 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-27 21:57:56.957897 | orchestrator | Saturday 27 September 2025 21:57:55 +0000 (0:00:00.125) 0:00:20.823 ****
2025-09-27 21:57:56.957906 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957914 | orchestrator |
2025-09-27 21:57:56.957923 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-27 21:57:56.957932 | orchestrator | Saturday 27 September 2025 21:57:55 +0000 (0:00:00.140) 0:00:20.963 ****
2025-09-27 21:57:56.957940 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.957949 | orchestrator |
2025-09-27 21:57:56.957957 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-27 21:57:56.958001 | orchestrator | Saturday 27 September 2025 21:57:55 +0000 (0:00:00.131) 0:00:21.095 ****
2025-09-27 21:57:56.958012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:57:56.958079 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:57:56.958088 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.958097 | orchestrator |
2025-09-27 21:57:56.958106 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-27 21:57:56.958115 | orchestrator | Saturday 27 September 2025 21:57:56 +0000 (0:00:00.161) 0:00:21.256 ****
2025-09-27 21:57:56.958124 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:57:56.958133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:57:56.958141 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.958150 | orchestrator |
2025-09-27 21:57:56.958159 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-27 21:57:56.958167 | orchestrator | Saturday 27 September 2025 21:57:56 +0000 (0:00:00.326) 0:00:21.583 ****
2025-09-27 21:57:56.958176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:57:56.958185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:57:56.958194 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.958203 | orchestrator |
2025-09-27 21:57:56.958212 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-27 21:57:56.958221 | orchestrator | Saturday 27 September 2025 21:57:56 +0000 (0:00:00.159) 0:00:21.743 ****
2025-09-27 21:57:56.958230 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:57:56.958239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:57:56.958247 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.958256 | orchestrator |
2025-09-27 21:57:56.958265 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-27 21:57:56.958273 | orchestrator | Saturday 27 September 2025 21:57:56 +0000 (0:00:00.137) 0:00:21.906 ****
2025-09-27 21:57:56.958282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:57:56.958291 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:57:56.958299 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:57:56.958315 | orchestrator |
2025-09-27 21:57:56.958324 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-27 21:57:56.958333 | orchestrator | Saturday 27 September 2025 21:57:56 +0000 (0:00:00.137) 0:00:22.044 ****
2025-09-27 21:57:56.958341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:57:56.958357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:58:01.829642 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:58:01.829785 | orchestrator |
2025-09-27 21:58:01.829813 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-27 21:58:01.829837 | orchestrator | Saturday 27 September 2025 21:57:56 +0000 (0:00:00.154) 0:00:22.198 ****
2025-09-27 21:58:01.829886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:58:01.829910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:58:01.829929 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:58:01.829948 | orchestrator |
2025-09-27 21:58:01.829968 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-27 21:58:01.830122 | orchestrator | Saturday 27 September 2025 21:57:57 +0000 (0:00:00.146) 0:00:22.344 ****
2025-09-27 21:58:01.830136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:58:01.830156 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:58:01.830174 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:58:01.830191 | orchestrator |
2025-09-27 21:58:01.830211 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-27 21:58:01.830232 | orchestrator | Saturday 27 September 2025 21:57:57 +0000 (0:00:00.173) 0:00:22.518 ****
2025-09-27 21:58:01.830254 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:58:01.830274 | orchestrator |
2025-09-27 21:58:01.830294 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-27 21:58:01.830315 | orchestrator | Saturday 27 September 2025 21:57:57 +0000 (0:00:00.486) 0:00:23.005 ****
2025-09-27 21:58:01.830333 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:58:01.830350 | orchestrator |
2025-09-27 21:58:01.830364 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-27 21:58:01.830376 | orchestrator | Saturday 27 September 2025 21:57:58 +0000 (0:00:00.512) 0:00:23.518 ****
2025-09-27 21:58:01.830389 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:58:01.830402 | orchestrator |
2025-09-27 21:58:01.830414 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-27 21:58:01.830426 | orchestrator | Saturday 27 September 2025 21:57:58 +0000 (0:00:00.151) 0:00:23.670 ****
2025-09-27 21:58:01.830439 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'vg_name': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:58:01.830453 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'vg_name': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:58:01.830466 | orchestrator |
2025-09-27 21:58:01.830490 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-27 21:58:01.830503 | orchestrator | Saturday 27 September 2025 21:57:58 +0000 (0:00:00.172) 0:00:23.842 ****
2025-09-27 21:58:01.830516 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:58:01.830551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:58:01.830562 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:58:01.830573 | orchestrator |
2025-09-27 21:58:01.830583 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-27 21:58:01.830594 | orchestrator | Saturday 27 September 2025 21:57:58 +0000 (0:00:00.141) 0:00:23.983 ****
2025-09-27 21:58:01.830604 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:58:01.830615 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:58:01.830626 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:58:01.830637 | orchestrator |
2025-09-27 21:58:01.830647 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-27 21:58:01.830658 | orchestrator | Saturday 27 September 2025 21:57:59 +0000 (0:00:00.270) 0:00:24.254 ****
2025-09-27 21:58:01.830669 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 21:58:01.830680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 21:58:01.830690 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:58:01.830701 | orchestrator |
2025-09-27 21:58:01.830712 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-27 21:58:01.830722 | orchestrator | Saturday 27 September 2025 21:57:59 +0000 (0:00:00.141) 0:00:24.396 ****
2025-09-27 21:58:01.830733 | orchestrator | ok: [testbed-node-3] => {
2025-09-27 21:58:01.830744 | orchestrator |     "lvm_report": {
2025-09-27 21:58:01.830754 | orchestrator |         "lv": [
2025-09-27 21:58:01.830765 | orchestrator |             {
2025-09-27 21:58:01.830798 | orchestrator |                 "lv_name": "osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491",
2025-09-27 21:58:01.830810 | orchestrator |                 "vg_name": "ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491"
2025-09-27 21:58:01.830821 | orchestrator |             },
2025-09-27 21:58:01.830831 | orchestrator |             {
2025-09-27 21:58:01.830842 | orchestrator |                 "lv_name": "osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188",
2025-09-27 21:58:01.830853 | orchestrator |                 "vg_name": "ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188"
2025-09-27 21:58:01.830863 | orchestrator |             }
2025-09-27 21:58:01.830874 | orchestrator |         ],
2025-09-27 21:58:01.830885 | orchestrator |         "pv": [
2025-09-27 21:58:01.830895 | orchestrator |             {
2025-09-27 21:58:01.830906 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-27 21:58:01.830917 | orchestrator |                 "vg_name": "ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491"
2025-09-27 21:58:01.830927 | orchestrator |             },
2025-09-27 21:58:01.830938 | orchestrator |             {
2025-09-27 21:58:01.830949 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-27 21:58:01.830959 | orchestrator |                 "vg_name": "ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188"
2025-09-27 21:58:01.831004 | orchestrator |             }
2025-09-27 21:58:01.831018 | orchestrator |         ]
2025-09-27 21:58:01.831028 | orchestrator |     }
2025-09-27 21:58:01.831039 | orchestrator | }
2025-09-27 21:58:01.831050 | orchestrator |
2025-09-27 21:58:01.831061 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-27 21:58:01.831072 | orchestrator |
2025-09-27 21:58:01.831082 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-27 21:58:01.831093 | orchestrator | Saturday 27 September 2025 21:57:59 +0000 (0:00:00.272) 0:00:24.668 ****
2025-09-27 21:58:01.831104 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-27 21:58:01.831123 | orchestrator |
2025-09-27 21:58:01.831134 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-27 21:58:01.831145 | orchestrator | Saturday 27 September 2025 21:57:59 +0000 (0:00:00.213) 0:00:24.881 ****
2025-09-27 21:58:01.831155 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:58:01.831166 | orchestrator |
2025-09-27 21:58:01.831177 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:01.831188 | orchestrator | Saturday 27 September 2025 21:57:59 +0000 (0:00:00.212) 0:00:25.094 ****
2025-09-27 21:58:01.831207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-27 21:58:01.831226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-27 21:58:01.831246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-27 21:58:01.831265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-27 21:58:01.831284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-27 21:58:01.831302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-27 21:58:01.831314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-27 21:58:01.831332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-27 21:58:01.831343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-27 21:58:01.831354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-27 21:58:01.831365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-27 21:58:01.831375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-27 21:58:01.831386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-27 21:58:01.831397 | orchestrator |
2025-09-27 21:58:01.831407 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:01.831418 | orchestrator | Saturday 27 September 2025 21:58:00 +0000 (0:00:00.356) 0:00:25.451 ****
2025-09-27 21:58:01.831429 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:01.831439 | orchestrator |
2025-09-27 21:58:01.831450 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:01.831461 | orchestrator | Saturday 27 September 2025 21:58:00 +0000 (0:00:00.179) 0:00:25.630 ****
2025-09-27 21:58:01.831472 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:01.831482 | orchestrator |
2025-09-27 21:58:01.831493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:01.831504 | orchestrator | Saturday 27 September 2025 21:58:00 +0000 (0:00:00.175) 0:00:25.806 ****
2025-09-27 21:58:01.831514 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:01.831525 | orchestrator |
2025-09-27 21:58:01.831536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:01.831546 | orchestrator | Saturday 27 September 2025 21:58:00 +0000 (0:00:00.183) 0:00:25.990 ****
2025-09-27 21:58:01.831557 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:01.831568 | orchestrator |
2025-09-27 21:58:01.831578 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:01.831589 | orchestrator | Saturday 27 September 2025 21:58:01 +0000 (0:00:00.529) 0:00:26.519 ****
2025-09-27 21:58:01.831600 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:01.831610 | orchestrator |
2025-09-27 21:58:01.831621 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:01.831632 | orchestrator | Saturday 27 September 2025 21:58:01 +0000 (0:00:00.196) 0:00:26.715 ****
2025-09-27 21:58:01.831642 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:01.831653 | orchestrator |
2025-09-27 21:58:01.831672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:01.831682 | orchestrator | Saturday 27 September 2025 21:58:01 +0000 (0:00:00.182) 0:00:26.897 ****
2025-09-27 21:58:01.831693 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:01.831704 | orchestrator |
2025-09-27 21:58:01.831724 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:11.500551 | orchestrator | Saturday 27 September 2025 21:58:01 +0000 (0:00:00.171) 0:00:27.068 ****
2025-09-27 21:58:11.500672 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:11.500690 | orchestrator |
2025-09-27 21:58:11.500702 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:11.500713 | orchestrator | Saturday 27 September 2025 21:58:01 +0000 (0:00:00.179) 0:00:27.248 ****
2025-09-27 21:58:11.500725 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43)
2025-09-27 21:58:11.500737 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43)
2025-09-27 21:58:11.500748 | orchestrator |
2025-09-27 21:58:11.500759 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:11.500769 | orchestrator | Saturday 27 September 2025 21:58:02 +0000 (0:00:00.370) 0:00:27.619 ****
2025-09-27 21:58:11.500780 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae)
2025-09-27 21:58:11.500791 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae)
2025-09-27 21:58:11.500802 | orchestrator |
2025-09-27 21:58:11.500812 | orchestrator | TASK [Add known
links to the list of available block devices] ****************** 2025-09-27 21:58:11.500823 | orchestrator | Saturday 27 September 2025 21:58:02 +0000 (0:00:00.366) 0:00:27.986 **** 2025-09-27 21:58:11.500834 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901) 2025-09-27 21:58:11.500844 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901) 2025-09-27 21:58:11.500855 | orchestrator | 2025-09-27 21:58:11.500866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:58:11.500876 | orchestrator | Saturday 27 September 2025 21:58:03 +0000 (0:00:00.398) 0:00:28.384 **** 2025-09-27 21:58:11.500887 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60) 2025-09-27 21:58:11.500898 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60) 2025-09-27 21:58:11.500908 | orchestrator | 2025-09-27 21:58:11.500919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:58:11.500930 | orchestrator | Saturday 27 September 2025 21:58:03 +0000 (0:00:00.358) 0:00:28.743 **** 2025-09-27 21:58:11.500940 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-27 21:58:11.500951 | orchestrator | 2025-09-27 21:58:11.500962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.500972 | orchestrator | Saturday 27 September 2025 21:58:03 +0000 (0:00:00.304) 0:00:29.048 **** 2025-09-27 21:58:11.501006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-27 21:58:11.501020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-27 21:58:11.501032 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-27 21:58:11.501045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-27 21:58:11.501057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-27 21:58:11.501069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-27 21:58:11.501081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-27 21:58:11.501116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-27 21:58:11.501129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-27 21:58:11.501141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-27 21:58:11.501152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-27 21:58:11.501164 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-27 21:58:11.501176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-27 21:58:11.501189 | orchestrator | 2025-09-27 21:58:11.501219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501231 | orchestrator | Saturday 27 September 2025 21:58:04 +0000 (0:00:00.543) 0:00:29.592 **** 2025-09-27 21:58:11.501244 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501255 | orchestrator | 2025-09-27 21:58:11.501268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501280 | orchestrator | Saturday 27 September 2025 21:58:04 +0000 
(0:00:00.195) 0:00:29.788 **** 2025-09-27 21:58:11.501292 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501305 | orchestrator | 2025-09-27 21:58:11.501316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501326 | orchestrator | Saturday 27 September 2025 21:58:04 +0000 (0:00:00.188) 0:00:29.976 **** 2025-09-27 21:58:11.501336 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501347 | orchestrator | 2025-09-27 21:58:11.501358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501368 | orchestrator | Saturday 27 September 2025 21:58:04 +0000 (0:00:00.182) 0:00:30.159 **** 2025-09-27 21:58:11.501379 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501390 | orchestrator | 2025-09-27 21:58:11.501419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501430 | orchestrator | Saturday 27 September 2025 21:58:05 +0000 (0:00:00.180) 0:00:30.339 **** 2025-09-27 21:58:11.501441 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501452 | orchestrator | 2025-09-27 21:58:11.501462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501473 | orchestrator | Saturday 27 September 2025 21:58:05 +0000 (0:00:00.168) 0:00:30.508 **** 2025-09-27 21:58:11.501483 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501494 | orchestrator | 2025-09-27 21:58:11.501504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501515 | orchestrator | Saturday 27 September 2025 21:58:05 +0000 (0:00:00.194) 0:00:30.702 **** 2025-09-27 21:58:11.501525 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501536 | orchestrator | 2025-09-27 21:58:11.501546 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501557 | orchestrator | Saturday 27 September 2025 21:58:05 +0000 (0:00:00.196) 0:00:30.899 **** 2025-09-27 21:58:11.501567 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501577 | orchestrator | 2025-09-27 21:58:11.501588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501599 | orchestrator | Saturday 27 September 2025 21:58:05 +0000 (0:00:00.185) 0:00:31.084 **** 2025-09-27 21:58:11.501609 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-27 21:58:11.501620 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-27 21:58:11.501631 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-27 21:58:11.501642 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-27 21:58:11.501653 | orchestrator | 2025-09-27 21:58:11.501664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501674 | orchestrator | Saturday 27 September 2025 21:58:06 +0000 (0:00:00.758) 0:00:31.842 **** 2025-09-27 21:58:11.501693 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501704 | orchestrator | 2025-09-27 21:58:11.501715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501725 | orchestrator | Saturday 27 September 2025 21:58:06 +0000 (0:00:00.184) 0:00:32.027 **** 2025-09-27 21:58:11.501736 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501746 | orchestrator | 2025-09-27 21:58:11.501757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501768 | orchestrator | Saturday 27 September 2025 21:58:06 +0000 (0:00:00.198) 0:00:32.226 **** 2025-09-27 21:58:11.501778 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501789 | orchestrator | 2025-09-27 
21:58:11.501800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:58:11.501810 | orchestrator | Saturday 27 September 2025 21:58:07 +0000 (0:00:00.528) 0:00:32.754 **** 2025-09-27 21:58:11.501821 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501831 | orchestrator | 2025-09-27 21:58:11.501842 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-27 21:58:11.501852 | orchestrator | Saturday 27 September 2025 21:58:07 +0000 (0:00:00.228) 0:00:32.982 **** 2025-09-27 21:58:11.501869 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.501879 | orchestrator | 2025-09-27 21:58:11.501890 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-27 21:58:11.501901 | orchestrator | Saturday 27 September 2025 21:58:07 +0000 (0:00:00.134) 0:00:33.116 **** 2025-09-27 21:58:11.501911 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'be08f40e-52da-5801-960c-910a686d222b'}}) 2025-09-27 21:58:11.501922 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a2801305-6ac8-5a65-9707-7cc055d05458'}}) 2025-09-27 21:58:11.501933 | orchestrator | 2025-09-27 21:58:11.501943 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-27 21:58:11.501954 | orchestrator | Saturday 27 September 2025 21:58:08 +0000 (0:00:00.179) 0:00:33.296 **** 2025-09-27 21:58:11.501966 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'}) 2025-09-27 21:58:11.501978 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'}) 2025-09-27 21:58:11.502011 | orchestrator | 2025-09-27 
21:58:11.502092 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-27 21:58:11.502103 | orchestrator | Saturday 27 September 2025 21:58:10 +0000 (0:00:01.955) 0:00:35.252 **** 2025-09-27 21:58:11.502114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})  2025-09-27 21:58:11.502127 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})  2025-09-27 21:58:11.502138 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:11.502149 | orchestrator | 2025-09-27 21:58:11.502159 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-27 21:58:11.502170 | orchestrator | Saturday 27 September 2025 21:58:10 +0000 (0:00:00.148) 0:00:35.400 **** 2025-09-27 21:58:11.502181 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'}) 2025-09-27 21:58:11.502192 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'}) 2025-09-27 21:58:11.502202 | orchestrator | 2025-09-27 21:58:11.502221 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-27 21:58:16.612423 | orchestrator | Saturday 27 September 2025 21:58:11 +0000 (0:00:01.338) 0:00:36.739 **** 2025-09-27 21:58:16.612567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})  2025-09-27 21:58:16.612585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 
'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})  2025-09-27 21:58:16.612597 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.612609 | orchestrator | 2025-09-27 21:58:16.612621 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-27 21:58:16.612633 | orchestrator | Saturday 27 September 2025 21:58:11 +0000 (0:00:00.147) 0:00:36.886 **** 2025-09-27 21:58:16.612643 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.612654 | orchestrator | 2025-09-27 21:58:16.612665 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-27 21:58:16.612676 | orchestrator | Saturday 27 September 2025 21:58:11 +0000 (0:00:00.128) 0:00:37.015 **** 2025-09-27 21:58:16.612687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})  2025-09-27 21:58:16.612698 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})  2025-09-27 21:58:16.612709 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.612719 | orchestrator | 2025-09-27 21:58:16.612730 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-27 21:58:16.612741 | orchestrator | Saturday 27 September 2025 21:58:11 +0000 (0:00:00.163) 0:00:37.178 **** 2025-09-27 21:58:16.612751 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.612762 | orchestrator | 2025-09-27 21:58:16.612773 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-27 21:58:16.612783 | orchestrator | Saturday 27 September 2025 21:58:12 +0000 (0:00:00.136) 0:00:37.314 **** 2025-09-27 21:58:16.612794 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})  2025-09-27 21:58:16.612805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})  2025-09-27 21:58:16.612816 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.612827 | orchestrator | 2025-09-27 21:58:16.612837 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-27 21:58:16.612848 | orchestrator | Saturday 27 September 2025 21:58:12 +0000 (0:00:00.141) 0:00:37.456 **** 2025-09-27 21:58:16.612874 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.612886 | orchestrator | 2025-09-27 21:58:16.612896 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-27 21:58:16.612907 | orchestrator | Saturday 27 September 2025 21:58:12 +0000 (0:00:00.279) 0:00:37.735 **** 2025-09-27 21:58:16.612918 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})  2025-09-27 21:58:16.612928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})  2025-09-27 21:58:16.612939 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.612951 | orchestrator | 2025-09-27 21:58:16.612964 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-27 21:58:16.612976 | orchestrator | Saturday 27 September 2025 21:58:12 +0000 (0:00:00.136) 0:00:37.872 **** 2025-09-27 21:58:16.612988 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:58:16.613029 | orchestrator | 2025-09-27 21:58:16.613041 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] 
**************** 2025-09-27 21:58:16.613053 | orchestrator | Saturday 27 September 2025 21:58:12 +0000 (0:00:00.133) 0:00:38.005 **** 2025-09-27 21:58:16.613077 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})  2025-09-27 21:58:16.613090 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})  2025-09-27 21:58:16.613102 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.613115 | orchestrator | 2025-09-27 21:58:16.613127 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-27 21:58:16.613139 | orchestrator | Saturday 27 September 2025 21:58:12 +0000 (0:00:00.141) 0:00:38.147 **** 2025-09-27 21:58:16.613152 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})  2025-09-27 21:58:16.613165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})  2025-09-27 21:58:16.613177 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.613189 | orchestrator | 2025-09-27 21:58:16.613202 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-27 21:58:16.613214 | orchestrator | Saturday 27 September 2025 21:58:13 +0000 (0:00:00.134) 0:00:38.282 **** 2025-09-27 21:58:16.613244 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})  2025-09-27 21:58:16.613258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 
'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})  2025-09-27 21:58:16.613271 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.613283 | orchestrator | 2025-09-27 21:58:16.613296 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-27 21:58:16.613308 | orchestrator | Saturday 27 September 2025 21:58:13 +0000 (0:00:00.135) 0:00:38.417 **** 2025-09-27 21:58:16.613320 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.613331 | orchestrator | 2025-09-27 21:58:16.613341 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-27 21:58:16.613352 | orchestrator | Saturday 27 September 2025 21:58:13 +0000 (0:00:00.118) 0:00:38.536 **** 2025-09-27 21:58:16.613363 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.613374 | orchestrator | 2025-09-27 21:58:16.613384 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-27 21:58:16.613395 | orchestrator | Saturday 27 September 2025 21:58:13 +0000 (0:00:00.135) 0:00:38.672 **** 2025-09-27 21:58:16.613406 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.613416 | orchestrator | 2025-09-27 21:58:16.613427 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-27 21:58:16.613438 | orchestrator | Saturday 27 September 2025 21:58:13 +0000 (0:00:00.118) 0:00:38.790 **** 2025-09-27 21:58:16.613448 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:58:16.613459 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-27 21:58:16.613470 | orchestrator | } 2025-09-27 21:58:16.613481 | orchestrator | 2025-09-27 21:58:16.613492 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-27 21:58:16.613503 | orchestrator | Saturday 27 September 2025 21:58:13 +0000 (0:00:00.127) 0:00:38.918 **** 2025-09-27 21:58:16.613513 | orchestrator | 
ok: [testbed-node-4] => { 2025-09-27 21:58:16.613524 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-27 21:58:16.613535 | orchestrator | } 2025-09-27 21:58:16.613545 | orchestrator | 2025-09-27 21:58:16.613556 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-27 21:58:16.613566 | orchestrator | Saturday 27 September 2025 21:58:13 +0000 (0:00:00.133) 0:00:39.052 **** 2025-09-27 21:58:16.613577 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:58:16.613588 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-27 21:58:16.613620 | orchestrator | } 2025-09-27 21:58:16.613631 | orchestrator | 2025-09-27 21:58:16.613642 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-27 21:58:16.613653 | orchestrator | Saturday 27 September 2025 21:58:13 +0000 (0:00:00.132) 0:00:39.184 **** 2025-09-27 21:58:16.613664 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:58:16.613674 | orchestrator | 2025-09-27 21:58:16.613685 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-27 21:58:16.613696 | orchestrator | Saturday 27 September 2025 21:58:14 +0000 (0:00:00.662) 0:00:39.847 **** 2025-09-27 21:58:16.613707 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:58:16.613718 | orchestrator | 2025-09-27 21:58:16.613729 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-27 21:58:16.613740 | orchestrator | Saturday 27 September 2025 21:58:15 +0000 (0:00:00.502) 0:00:40.349 **** 2025-09-27 21:58:16.613751 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:58:16.613762 | orchestrator | 2025-09-27 21:58:16.613772 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-27 21:58:16.613783 | orchestrator | Saturday 27 September 2025 21:58:15 +0000 (0:00:00.501) 0:00:40.850 **** 2025-09-27 
21:58:16.613794 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:58:16.613804 | orchestrator | 2025-09-27 21:58:16.613815 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-27 21:58:16.613826 | orchestrator | Saturday 27 September 2025 21:58:15 +0000 (0:00:00.160) 0:00:41.011 **** 2025-09-27 21:58:16.613836 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.613847 | orchestrator | 2025-09-27 21:58:16.613858 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-27 21:58:16.613869 | orchestrator | Saturday 27 September 2025 21:58:15 +0000 (0:00:00.089) 0:00:41.101 **** 2025-09-27 21:58:16.613879 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.613890 | orchestrator | 2025-09-27 21:58:16.613901 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-27 21:58:16.613911 | orchestrator | Saturday 27 September 2025 21:58:15 +0000 (0:00:00.105) 0:00:41.206 **** 2025-09-27 21:58:16.613922 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:58:16.613933 | orchestrator |  "vgs_report": { 2025-09-27 21:58:16.613944 | orchestrator |  "vg": [] 2025-09-27 21:58:16.613954 | orchestrator |  } 2025-09-27 21:58:16.613965 | orchestrator | } 2025-09-27 21:58:16.613976 | orchestrator | 2025-09-27 21:58:16.613987 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-27 21:58:16.614069 | orchestrator | Saturday 27 September 2025 21:58:16 +0000 (0:00:00.123) 0:00:41.330 **** 2025-09-27 21:58:16.614083 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.614095 | orchestrator | 2025-09-27 21:58:16.614106 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-27 21:58:16.614116 | orchestrator | Saturday 27 September 2025 21:58:16 +0000 (0:00:00.123) 0:00:41.453 **** 2025-09-27 
21:58:16.614127 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.614138 | orchestrator | 2025-09-27 21:58:16.614157 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-27 21:58:16.614168 | orchestrator | Saturday 27 September 2025 21:58:16 +0000 (0:00:00.133) 0:00:41.587 **** 2025-09-27 21:58:16.614179 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.614189 | orchestrator | 2025-09-27 21:58:16.614200 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-27 21:58:16.614211 | orchestrator | Saturday 27 September 2025 21:58:16 +0000 (0:00:00.137) 0:00:41.725 **** 2025-09-27 21:58:16.614222 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:16.614233 | orchestrator | 2025-09-27 21:58:16.614244 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-27 21:58:16.614262 | orchestrator | Saturday 27 September 2025 21:58:16 +0000 (0:00:00.126) 0:00:41.852 **** 2025-09-27 21:58:21.014480 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:21.014626 | orchestrator | 2025-09-27 21:58:21.014690 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-27 21:58:21.014714 | orchestrator | Saturday 27 September 2025 21:58:16 +0000 (0:00:00.122) 0:00:41.974 **** 2025-09-27 21:58:21.014734 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:21.014753 | orchestrator | 2025-09-27 21:58:21.014773 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-27 21:58:21.014786 | orchestrator | Saturday 27 September 2025 21:58:16 +0000 (0:00:00.267) 0:00:42.241 **** 2025-09-27 21:58:21.014796 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:58:21.014809 | orchestrator | 2025-09-27 21:58:21.014828 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
****************
2025-09-27 21:58:21.014845 | orchestrator | Saturday 27 September 2025 21:58:17 +0000 (0:00:00.131) 0:00:42.373 ****
2025-09-27 21:58:21.014864 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.014883 | orchestrator |
2025-09-27 21:58:21.014897 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-27 21:58:21.014909 | orchestrator | Saturday 27 September 2025 21:58:17 +0000 (0:00:00.133) 0:00:42.506 ****
2025-09-27 21:58:21.014919 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.014930 | orchestrator |
2025-09-27 21:58:21.014940 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-27 21:58:21.014951 | orchestrator | Saturday 27 September 2025 21:58:17 +0000 (0:00:00.118) 0:00:42.625 ****
2025-09-27 21:58:21.014962 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.014981 | orchestrator |
2025-09-27 21:58:21.015030 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-27 21:58:21.015051 | orchestrator | Saturday 27 September 2025 21:58:17 +0000 (0:00:00.134) 0:00:42.759 ****
2025-09-27 21:58:21.015071 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015089 | orchestrator |
2025-09-27 21:58:21.015109 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-27 21:58:21.015124 | orchestrator | Saturday 27 September 2025 21:58:17 +0000 (0:00:00.112) 0:00:42.872 ****
2025-09-27 21:58:21.015135 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015147 | orchestrator |
2025-09-27 21:58:21.015160 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-27 21:58:21.015172 | orchestrator | Saturday 27 September 2025 21:58:17 +0000 (0:00:00.126) 0:00:42.998 ****
2025-09-27 21:58:21.015185 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015196 | orchestrator |
2025-09-27 21:58:21.015209 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-27 21:58:21.015221 | orchestrator | Saturday 27 September 2025 21:58:17 +0000 (0:00:00.139) 0:00:43.137 ****
2025-09-27 21:58:21.015234 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015246 | orchestrator |
2025-09-27 21:58:21.015258 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-27 21:58:21.015271 | orchestrator | Saturday 27 September 2025 21:58:18 +0000 (0:00:00.119) 0:00:43.257 ****
2025-09-27 21:58:21.015300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.015316 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.015330 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015340 | orchestrator |
2025-09-27 21:58:21.015351 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-27 21:58:21.015362 | orchestrator | Saturday 27 September 2025 21:58:18 +0000 (0:00:00.120) 0:00:43.378 ****
2025-09-27 21:58:21.015373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.015384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.015407 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015417 | orchestrator |
2025-09-27 21:58:21.015428 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-27 21:58:21.015439 | orchestrator | Saturday 27 September 2025 21:58:18 +0000 (0:00:00.112) 0:00:43.491 ****
2025-09-27 21:58:21.015449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.015460 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.015471 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015481 | orchestrator |
2025-09-27 21:58:21.015492 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-27 21:58:21.015503 | orchestrator | Saturday 27 September 2025 21:58:18 +0000 (0:00:00.126) 0:00:43.617 ****
2025-09-27 21:58:21.015513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.015524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.015535 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015545 | orchestrator |
2025-09-27 21:58:21.015556 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-27 21:58:21.015588 | orchestrator | Saturday 27 September 2025 21:58:18 +0000 (0:00:00.261) 0:00:43.878 ****
2025-09-27 21:58:21.015599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.015610 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.015621 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015631 | orchestrator |
2025-09-27 21:58:21.015642 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-27 21:58:21.015652 | orchestrator | Saturday 27 September 2025 21:58:18 +0000 (0:00:00.131) 0:00:44.010 ****
2025-09-27 21:58:21.015663 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.015674 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.015684 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015696 | orchestrator |
2025-09-27 21:58:21.015706 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-27 21:58:21.015717 | orchestrator | Saturday 27 September 2025 21:58:18 +0000 (0:00:00.173) 0:00:44.183 ****
2025-09-27 21:58:21.015728 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.015738 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.015749 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015760 | orchestrator |
2025-09-27 21:58:21.015770 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-27 21:58:21.015781 | orchestrator | Saturday 27 September 2025 21:58:19 +0000 (0:00:00.156) 0:00:44.339 ****
2025-09-27 21:58:21.015791 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.015810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.015821 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.015831 | orchestrator |
2025-09-27 21:58:21.015848 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-27 21:58:21.015859 | orchestrator | Saturday 27 September 2025 21:58:19 +0000 (0:00:00.177) 0:00:44.517 ****
2025-09-27 21:58:21.015869 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:58:21.015880 | orchestrator |
2025-09-27 21:58:21.015891 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-27 21:58:21.015901 | orchestrator | Saturday 27 September 2025 21:58:19 +0000 (0:00:00.525) 0:00:45.043 ****
2025-09-27 21:58:21.015912 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:58:21.015923 | orchestrator |
2025-09-27 21:58:21.015933 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-27 21:58:21.015944 | orchestrator | Saturday 27 September 2025 21:58:20 +0000 (0:00:00.525) 0:00:45.568 ****
2025-09-27 21:58:21.015954 | orchestrator | ok: [testbed-node-4]
2025-09-27 21:58:21.015965 | orchestrator |
2025-09-27 21:58:21.015975 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-27 21:58:21.015986 | orchestrator | Saturday 27 September 2025 21:58:20 +0000 (0:00:00.158) 0:00:45.727 ****
2025-09-27 21:58:21.016038 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'vg_name': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.016053 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'vg_name': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.016064 | orchestrator |
2025-09-27 21:58:21.016075 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-27 21:58:21.016086 | orchestrator | Saturday 27 September 2025 21:58:20 +0000 (0:00:00.184) 0:00:45.911 ****
2025-09-27 21:58:21.016097 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.016108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.016118 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:21.016129 | orchestrator |
2025-09-27 21:58:21.016140 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-27 21:58:21.016150 | orchestrator | Saturday 27 September 2025 21:58:20 +0000 (0:00:00.170) 0:00:46.082 ****
2025-09-27 21:58:21.016161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:21.016172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:21.016191 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:26.727365 | orchestrator |
2025-09-27 21:58:26.727462 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-27 21:58:26.727479 | orchestrator | Saturday 27 September 2025 21:58:21 +0000 (0:00:00.173) 0:00:46.255 ****
2025-09-27 21:58:26.727493 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 21:58:26.727506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 21:58:26.727518 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:58:26.727531 | orchestrator |
2025-09-27 21:58:26.727543 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-27 21:58:26.727555 | orchestrator | Saturday 27 September 2025 21:58:21 +0000 (0:00:00.183) 0:00:46.439 ****
2025-09-27 21:58:26.727585 | orchestrator | ok: [testbed-node-4] => {
2025-09-27 21:58:26.727597 | orchestrator |  "lvm_report": {
2025-09-27 21:58:26.727610 | orchestrator |  "lv": [
2025-09-27 21:58:26.727622 | orchestrator |  {
2025-09-27 21:58:26.727634 | orchestrator |  "lv_name": "osd-block-a2801305-6ac8-5a65-9707-7cc055d05458",
2025-09-27 21:58:26.727646 | orchestrator |  "vg_name": "ceph-a2801305-6ac8-5a65-9707-7cc055d05458"
2025-09-27 21:58:26.727658 | orchestrator |  },
2025-09-27 21:58:26.727669 | orchestrator |  {
2025-09-27 21:58:26.727681 | orchestrator |  "lv_name": "osd-block-be08f40e-52da-5801-960c-910a686d222b",
2025-09-27 21:58:26.727692 | orchestrator |  "vg_name": "ceph-be08f40e-52da-5801-960c-910a686d222b"
2025-09-27 21:58:26.727703 | orchestrator |  }
2025-09-27 21:58:26.727715 | orchestrator |  ],
2025-09-27 21:58:26.727726 | orchestrator |  "pv": [
2025-09-27 21:58:26.727737 | orchestrator |  {
2025-09-27 21:58:26.727749 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-27 21:58:26.727760 | orchestrator |  "vg_name": "ceph-be08f40e-52da-5801-960c-910a686d222b"
2025-09-27 21:58:26.727772 | orchestrator |  },
2025-09-27 21:58:26.727783 | orchestrator |  {
2025-09-27 21:58:26.727794 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-27 21:58:26.727806 | orchestrator |  "vg_name": "ceph-a2801305-6ac8-5a65-9707-7cc055d05458"
2025-09-27 21:58:26.727817 | orchestrator |  }
2025-09-27 21:58:26.727851 | orchestrator |  ]
2025-09-27 21:58:26.727862 | orchestrator |  }
2025-09-27 21:58:26.727873 | orchestrator | }
2025-09-27 21:58:26.727884 | orchestrator |
2025-09-27 21:58:26.727895 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-27 21:58:26.727906 | orchestrator |
2025-09-27 21:58:26.727919 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-27 21:58:26.727931 | orchestrator | Saturday 27 September 2025 21:58:21 +0000 (0:00:00.489) 0:00:46.928 ****
2025-09-27 21:58:26.727944 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-27 21:58:26.727957 | orchestrator |
2025-09-27 21:58:26.727970 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-27 21:58:26.727982 | orchestrator | Saturday 27 September 2025 21:58:21 +0000 (0:00:00.255) 0:00:47.184 ****
2025-09-27 21:58:26.727994 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:58:26.728030 | orchestrator |
2025-09-27 21:58:26.728044 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728056 | orchestrator | Saturday 27 September 2025 21:58:22 +0000 (0:00:00.227) 0:00:47.411 ****
2025-09-27 21:58:26.728068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-27 21:58:26.728080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-27 21:58:26.728093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-27 21:58:26.728104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-27 21:58:26.728116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-09-27 21:58:26.728128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-09-27 21:58:26.728140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-09-27 21:58:26.728152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-09-27 21:58:26.728164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-09-27 21:58:26.728176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-09-27 21:58:26.728189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-09-27 21:58:26.728208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-09-27 21:58:26.728220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-09-27 21:58:26.728231 | orchestrator |
2025-09-27 21:58:26.728243 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728255 | orchestrator | Saturday 27 September 2025 21:58:22 +0000 (0:00:00.387) 0:00:47.799 ****
2025-09-27 21:58:26.728267 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:26.728282 | orchestrator |
2025-09-27 21:58:26.728293 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728304 | orchestrator | Saturday 27 September 2025 21:58:22 +0000 (0:00:00.193) 0:00:47.993 ****
2025-09-27 21:58:26.728314 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:26.728325 | orchestrator |
2025-09-27 21:58:26.728335 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728362 | orchestrator | Saturday 27 September 2025 21:58:22 +0000 (0:00:00.176) 0:00:48.169 ****
2025-09-27 21:58:26.728373 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:26.728384 | orchestrator |
2025-09-27 21:58:26.728395 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728405 | orchestrator | Saturday 27 September 2025 21:58:23 +0000 (0:00:00.182) 0:00:48.351 ****
2025-09-27 21:58:26.728416 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:26.728426 | orchestrator |
2025-09-27 21:58:26.728437 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728448 | orchestrator | Saturday 27 September 2025 21:58:23 +0000 (0:00:00.182) 0:00:48.534 ****
2025-09-27 21:58:26.728458 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:26.728469 | orchestrator |
2025-09-27 21:58:26.728479 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728490 | orchestrator | Saturday 27 September 2025 21:58:23 +0000 (0:00:00.178) 0:00:48.713 ****
2025-09-27 21:58:26.728501 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:26.728511 | orchestrator |
2025-09-27 21:58:26.728522 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728532 | orchestrator | Saturday 27 September 2025 21:58:23 +0000 (0:00:00.467) 0:00:49.181 ****
2025-09-27 21:58:26.728543 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:26.728553 | orchestrator |
2025-09-27 21:58:26.728564 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728574 | orchestrator | Saturday 27 September 2025 21:58:24 +0000 (0:00:00.198) 0:00:49.379 ****
2025-09-27 21:58:26.728585 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:26.728595 | orchestrator |
2025-09-27 21:58:26.728606 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728617 | orchestrator | Saturday 27 September 2025 21:58:24 +0000 (0:00:00.190) 0:00:49.570 ****
2025-09-27 21:58:26.728627 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187)
2025-09-27 21:58:26.728679 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187)
2025-09-27 21:58:26.728692 | orchestrator |
2025-09-27 21:58:26.728703 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728714 | orchestrator | Saturday 27 September 2025 21:58:24 +0000 (0:00:00.390) 0:00:49.960 ****
2025-09-27 21:58:26.728724 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55)
2025-09-27 21:58:26.728735 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55)
2025-09-27 21:58:26.728746 | orchestrator |
2025-09-27 21:58:26.728756 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728767 | orchestrator | Saturday 27 September 2025 21:58:25 +0000 (0:00:00.390) 0:00:50.351 ****
2025-09-27 21:58:26.728789 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96)
2025-09-27 21:58:26.728800 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96)
2025-09-27 21:58:26.728811 | orchestrator |
2025-09-27 21:58:26.728821 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728832 | orchestrator | Saturday 27 September 2025 21:58:25 +0000 (0:00:00.395) 0:00:50.747 ****
2025-09-27 21:58:26.728842 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f)
2025-09-27 21:58:26.728853 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f)
2025-09-27 21:58:26.728864 | orchestrator |
2025-09-27 21:58:26.728874 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-27 21:58:26.728957 | orchestrator | Saturday 27 September 2025 21:58:25 +0000 (0:00:00.436) 0:00:51.183 ****
2025-09-27 21:58:26.728976 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-27 21:58:26.728992 | orchestrator |
2025-09-27 21:58:26.729044 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:26.729061 | orchestrator | Saturday 27 September 2025 21:58:26 +0000 (0:00:00.367) 0:00:51.551 ****
2025-09-27 21:58:26.729077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-09-27 21:58:26.729093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-09-27 21:58:26.729110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-09-27 21:58:26.729127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-09-27 21:58:26.729143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-09-27 21:58:26.729159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-09-27 21:58:26.729175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-09-27 21:58:26.729192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-09-27 21:58:26.729211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-09-27 21:58:26.729230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-09-27 21:58:26.729249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-09-27 21:58:26.729279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-09-27 21:58:35.854353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-09-27 21:58:35.854450 | orchestrator |
2025-09-27 21:58:35.854466 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.854480 | orchestrator | Saturday 27 September 2025 21:58:26 +0000 (0:00:00.409) 0:00:51.960 ****
2025-09-27 21:58:35.854492 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.854505 | orchestrator |
2025-09-27 21:58:35.854517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.854529 | orchestrator | Saturday 27 September 2025 21:58:26 +0000 (0:00:00.193) 0:00:52.154 ****
2025-09-27 21:58:35.854540 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.854552 | orchestrator |
2025-09-27 21:58:35.854563 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.854575 | orchestrator | Saturday 27 September 2025 21:58:27 +0000 (0:00:00.205) 0:00:52.359 ****
2025-09-27 21:58:35.854586 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.854597 | orchestrator |
2025-09-27 21:58:35.854609 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.854643 | orchestrator | Saturday 27 September 2025 21:58:27 +0000 (0:00:00.626) 0:00:52.986 ****
2025-09-27 21:58:35.854655 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.854665 | orchestrator |
2025-09-27 21:58:35.854676 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.854687 | orchestrator | Saturday 27 September 2025 21:58:27 +0000 (0:00:00.200) 0:00:53.186 ****
2025-09-27 21:58:35.854697 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.854708 | orchestrator |
2025-09-27 21:58:35.854718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.854729 | orchestrator | Saturday 27 September 2025 21:58:28 +0000 (0:00:00.204) 0:00:53.390 ****
2025-09-27 21:58:35.854740 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.854750 | orchestrator |
2025-09-27 21:58:35.854761 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.854771 | orchestrator | Saturday 27 September 2025 21:58:28 +0000 (0:00:00.207) 0:00:53.598 ****
2025-09-27 21:58:35.854782 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.854792 | orchestrator |
2025-09-27 21:58:35.854803 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.854831 | orchestrator | Saturday 27 September 2025 21:58:28 +0000 (0:00:00.206) 0:00:53.804 ****
2025-09-27 21:58:35.854853 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.854864 | orchestrator |
2025-09-27 21:58:35.854874 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.854885 | orchestrator | Saturday 27 September 2025 21:58:28 +0000 (0:00:00.204) 0:00:54.008 ****
2025-09-27 21:58:35.854897 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-09-27 21:58:35.854910 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-09-27 21:58:35.854935 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-09-27 21:58:35.854947 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-09-27 21:58:35.854959 | orchestrator |
2025-09-27 21:58:35.855162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.855177 | orchestrator | Saturday 27 September 2025 21:58:29 +0000 (0:00:00.706) 0:00:54.715 ****
2025-09-27 21:58:35.855245 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.855271 | orchestrator |
2025-09-27 21:58:35.855283 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.855293 | orchestrator | Saturday 27 September 2025 21:58:29 +0000 (0:00:00.194) 0:00:54.909 ****
2025-09-27 21:58:35.855304 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.855332 | orchestrator |
2025-09-27 21:58:35.855344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.855354 | orchestrator | Saturday 27 September 2025 21:58:29 +0000 (0:00:00.201) 0:00:55.111 ****
2025-09-27 21:58:35.855365 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.855376 | orchestrator |
2025-09-27 21:58:35.855387 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-27 21:58:35.855397 | orchestrator | Saturday 27 September 2025 21:58:30 +0000 (0:00:00.212) 0:00:55.324 ****
2025-09-27 21:58:35.855408 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.855419 | orchestrator |
2025-09-27 21:58:35.855429 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-27 21:58:35.855440 | orchestrator | Saturday 27 September 2025 21:58:30 +0000 (0:00:00.345) 0:00:55.524 ****
2025-09-27 21:58:35.855451 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.855462 | orchestrator |
2025-09-27 21:58:35.855472 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-27 21:58:35.855483 | orchestrator | Saturday 27 September 2025 21:58:30 +0000 (0:00:00.345) 0:00:55.869 ****
2025-09-27 21:58:35.855493 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2625e84f-b704-594b-a79a-2de5db7d7d7c'}})
2025-09-27 21:58:35.855504 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30a62591-9a6e-5933-8bc7-7c2bee7235f5'}})
2025-09-27 21:58:35.855527 | orchestrator |
2025-09-27 21:58:35.855538 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-27 21:58:35.855548 | orchestrator | Saturday 27 September 2025 21:58:30 +0000 (0:00:00.190) 0:00:56.060 ****
2025-09-27 21:58:35.855560 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:35.855572 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:35.855583 | orchestrator |
2025-09-27 21:58:35.855594 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-27 21:58:35.855621 | orchestrator | Saturday 27 September 2025 21:58:32 +0000 (0:00:01.921) 0:00:57.982 ****
2025-09-27 21:58:35.855633 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:35.855645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:35.855656 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.855692 | orchestrator |
2025-09-27 21:58:35.855727 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-27 21:58:35.855751 | orchestrator | Saturday 27 September 2025 21:58:32 +0000 (0:00:00.158) 0:00:58.140 ****
2025-09-27 21:58:35.855774 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:35.855797 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:35.855821 | orchestrator |
2025-09-27 21:58:35.855833 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-27 21:58:35.855844 | orchestrator | Saturday 27 September 2025 21:58:34 +0000 (0:00:01.318) 0:00:59.459 ****
2025-09-27 21:58:35.855854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:35.855866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:35.855877 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.855888 | orchestrator |
2025-09-27 21:58:35.855899 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-27 21:58:35.855909 | orchestrator | Saturday 27 September 2025 21:58:34 +0000 (0:00:00.179) 0:00:59.639 ****
2025-09-27 21:58:35.855920 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.855931 | orchestrator |
2025-09-27 21:58:35.855942 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-27 21:58:35.855953 | orchestrator | Saturday 27 September 2025 21:58:34 +0000 (0:00:00.140) 0:00:59.779 ****
2025-09-27 21:58:35.855968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:35.855996 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:35.856047 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.856071 | orchestrator |
2025-09-27 21:58:35.856089 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-27 21:58:35.856106 | orchestrator | Saturday 27 September 2025 21:58:34 +0000 (0:00:00.164) 0:00:59.944 ****
2025-09-27 21:58:35.856123 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.856151 | orchestrator |
2025-09-27 21:58:35.856169 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-27 21:58:35.856186 | orchestrator | Saturday 27 September 2025 21:58:34 +0000 (0:00:00.141) 0:01:00.086 ****
2025-09-27 21:58:35.856202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:35.856217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:35.856233 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.856249 | orchestrator |
2025-09-27 21:58:35.856264 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-27 21:58:35.856280 | orchestrator | Saturday 27 September 2025 21:58:34 +0000 (0:00:00.155) 0:01:00.241 ****
2025-09-27 21:58:35.856297 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.856314 | orchestrator |
2025-09-27 21:58:35.856330 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-27 21:58:35.856348 | orchestrator | Saturday 27 September 2025 21:58:35 +0000 (0:00:00.139) 0:01:00.380 ****
2025-09-27 21:58:35.856367 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:35.856386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:35.856406 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:35.856425 | orchestrator |
2025-09-27 21:58:35.856444 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-27 21:58:35.856463 | orchestrator | Saturday 27 September 2025 21:58:35 +0000 (0:00:00.184) 0:01:00.565 ****
2025-09-27 21:58:35.856482 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:58:35.856502 | orchestrator |
2025-09-27 21:58:35.856543 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-27 21:58:35.856817 | orchestrator | Saturday 27 September 2025 21:58:35 +0000 (0:00:00.144) 0:01:00.709 ****
2025-09-27 21:58:35.856860 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:41.628396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:41.628510 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:41.628528 | orchestrator |
2025-09-27 21:58:41.628541 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-27 21:58:41.628554 | orchestrator | Saturday 27 September 2025 21:58:35 +0000 (0:00:00.385) 0:01:01.095 ****
2025-09-27 21:58:41.628566 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:41.628577 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:41.628588 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:41.628600 | orchestrator |
2025-09-27 21:58:41.628612 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-27 21:58:41.628623 | orchestrator | Saturday 27 September 2025 21:58:36 +0000 (0:00:00.162) 0:01:01.258 ****
2025-09-27 21:58:41.628634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 21:58:41.628645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 21:58:41.628656 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:41.628692 | orchestrator |
2025-09-27 21:58:41.628704 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-27 21:58:41.628715 | orchestrator | Saturday 27 September 2025 21:58:36 +0000 (0:00:00.142) 0:01:01.401 ****
2025-09-27 21:58:41.628726 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:41.628737 | orchestrator |
2025-09-27 21:58:41.628748 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-27 21:58:41.628758 | orchestrator | Saturday 27 September 2025 21:58:36 +0000 (0:00:00.142) 0:01:01.543 ****
2025-09-27 21:58:41.628769 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:41.628780 | orchestrator |
2025-09-27 21:58:41.628791 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-27 21:58:41.628801 | orchestrator | Saturday 27 September 2025 21:58:36 +0000 (0:00:00.149) 0:01:01.692 ****
2025-09-27 21:58:41.628812 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:58:41.628823 | orchestrator |
2025-09-27 21:58:41.628833 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-27 21:58:41.628845 | orchestrator | Saturday 27 September 2025 21:58:36 +0000 (0:00:00.135) 0:01:01.828 ****
2025-09-27 21:58:41.628855 | orchestrator | ok: [testbed-node-5] => {
2025-09-27 21:58:41.628867 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-27 21:58:41.628878 | orchestrator | }
2025-09-27 21:58:41.628890 | orchestrator |
2025-09-27 21:58:41.628901 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-27 21:58:41.628911 | orchestrator | Saturday 27 September 2025 21:58:36 +0000 (0:00:00.154) 0:01:01.983 ****
2025-09-27 21:58:41.628922 | orchestrator | ok: [testbed-node-5] => {
2025-09-27 21:58:41.628933 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-27 21:58:41.628944 | orchestrator | }
2025-09-27 21:58:41.628955 | orchestrator |
2025-09-27 21:58:41.628965 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-27 21:58:41.628977 | orchestrator | Saturday 27 September 2025 21:58:36 +0000 (0:00:00.154) 0:01:02.137 ****
2025-09-27 21:58:41.628988 | orchestrator | ok: [testbed-node-5] => {
2025-09-27 21:58:41.628998 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-27 21:58:41.629010 | orchestrator | }
2025-09-27 21:58:41.629021 | orchestrator |
2025-09-27 21:58:41.629055 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-27 21:58:41.629067 |
orchestrator | Saturday 27 September 2025 21:58:37 +0000 (0:00:00.124) 0:01:02.262 **** 2025-09-27 21:58:41.629077 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:58:41.629088 | orchestrator | 2025-09-27 21:58:41.629099 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-27 21:58:41.629109 | orchestrator | Saturday 27 September 2025 21:58:37 +0000 (0:00:00.516) 0:01:02.779 **** 2025-09-27 21:58:41.629120 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:58:41.629131 | orchestrator | 2025-09-27 21:58:41.629142 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-27 21:58:41.629152 | orchestrator | Saturday 27 September 2025 21:58:38 +0000 (0:00:00.538) 0:01:03.317 **** 2025-09-27 21:58:41.629163 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:58:41.629174 | orchestrator | 2025-09-27 21:58:41.629184 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-27 21:58:41.629195 | orchestrator | Saturday 27 September 2025 21:58:38 +0000 (0:00:00.515) 0:01:03.832 **** 2025-09-27 21:58:41.629206 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:58:41.629216 | orchestrator | 2025-09-27 21:58:41.629227 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-27 21:58:41.629238 | orchestrator | Saturday 27 September 2025 21:58:38 +0000 (0:00:00.273) 0:01:04.105 **** 2025-09-27 21:58:41.629249 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629260 | orchestrator | 2025-09-27 21:58:41.629270 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-27 21:58:41.629281 | orchestrator | Saturday 27 September 2025 21:58:38 +0000 (0:00:00.102) 0:01:04.207 **** 2025-09-27 21:58:41.629302 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629313 | orchestrator | 2025-09-27 21:58:41.629324 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-27 21:58:41.629334 | orchestrator | Saturday 27 September 2025 21:58:39 +0000 (0:00:00.115) 0:01:04.322 **** 2025-09-27 21:58:41.629345 | orchestrator | ok: [testbed-node-5] => { 2025-09-27 21:58:41.629356 | orchestrator |  "vgs_report": { 2025-09-27 21:58:41.629367 | orchestrator |  "vg": [] 2025-09-27 21:58:41.629397 | orchestrator |  } 2025-09-27 21:58:41.629409 | orchestrator | } 2025-09-27 21:58:41.629419 | orchestrator | 2025-09-27 21:58:41.629430 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-27 21:58:41.629441 | orchestrator | Saturday 27 September 2025 21:58:39 +0000 (0:00:00.129) 0:01:04.452 **** 2025-09-27 21:58:41.629452 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629462 | orchestrator | 2025-09-27 21:58:41.629473 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-27 21:58:41.629484 | orchestrator | Saturday 27 September 2025 21:58:39 +0000 (0:00:00.128) 0:01:04.580 **** 2025-09-27 21:58:41.629494 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629505 | orchestrator | 2025-09-27 21:58:41.629516 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-27 21:58:41.629527 | orchestrator | Saturday 27 September 2025 21:58:39 +0000 (0:00:00.128) 0:01:04.709 **** 2025-09-27 21:58:41.629537 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629548 | orchestrator | 2025-09-27 21:58:41.629559 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-27 21:58:41.629570 | orchestrator | Saturday 27 September 2025 21:58:39 +0000 (0:00:00.128) 0:01:04.837 **** 2025-09-27 21:58:41.629580 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629591 | orchestrator | 2025-09-27 21:58:41.629602 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-27 21:58:41.629632 | orchestrator | Saturday 27 September 2025 21:58:39 +0000 (0:00:00.148) 0:01:04.986 **** 2025-09-27 21:58:41.629643 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629654 | orchestrator | 2025-09-27 21:58:41.629665 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-27 21:58:41.629676 | orchestrator | Saturday 27 September 2025 21:58:39 +0000 (0:00:00.131) 0:01:05.118 **** 2025-09-27 21:58:41.629686 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629697 | orchestrator | 2025-09-27 21:58:41.629708 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-27 21:58:41.629718 | orchestrator | Saturday 27 September 2025 21:58:40 +0000 (0:00:00.136) 0:01:05.255 **** 2025-09-27 21:58:41.629729 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629740 | orchestrator | 2025-09-27 21:58:41.629750 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-27 21:58:41.629761 | orchestrator | Saturday 27 September 2025 21:58:40 +0000 (0:00:00.128) 0:01:05.383 **** 2025-09-27 21:58:41.629772 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629783 | orchestrator | 2025-09-27 21:58:41.629793 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-27 21:58:41.629804 | orchestrator | Saturday 27 September 2025 21:58:40 +0000 (0:00:00.127) 0:01:05.511 **** 2025-09-27 21:58:41.629815 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629826 | orchestrator | 2025-09-27 21:58:41.629836 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-27 21:58:41.629852 | orchestrator | Saturday 27 September 2025 21:58:40 +0000 (0:00:00.278) 0:01:05.789 **** 
2025-09-27 21:58:41.629863 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629874 | orchestrator | 2025-09-27 21:58:41.629884 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-27 21:58:41.629895 | orchestrator | Saturday 27 September 2025 21:58:40 +0000 (0:00:00.125) 0:01:05.914 **** 2025-09-27 21:58:41.629906 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629924 | orchestrator | 2025-09-27 21:58:41.629935 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-27 21:58:41.629945 | orchestrator | Saturday 27 September 2025 21:58:40 +0000 (0:00:00.136) 0:01:06.050 **** 2025-09-27 21:58:41.629956 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.629967 | orchestrator | 2025-09-27 21:58:41.629978 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-27 21:58:41.629988 | orchestrator | Saturday 27 September 2025 21:58:40 +0000 (0:00:00.123) 0:01:06.174 **** 2025-09-27 21:58:41.629999 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.630010 | orchestrator | 2025-09-27 21:58:41.630100 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-27 21:58:41.630112 | orchestrator | Saturday 27 September 2025 21:58:41 +0000 (0:00:00.129) 0:01:06.304 **** 2025-09-27 21:58:41.630123 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.630133 | orchestrator | 2025-09-27 21:58:41.630144 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-27 21:58:41.630155 | orchestrator | Saturday 27 September 2025 21:58:41 +0000 (0:00:00.130) 0:01:06.434 **** 2025-09-27 21:58:41.630166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 
21:58:41.630177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:41.630188 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.630199 | orchestrator | 2025-09-27 21:58:41.630210 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-27 21:58:41.630221 | orchestrator | Saturday 27 September 2025 21:58:41 +0000 (0:00:00.146) 0:01:06.580 **** 2025-09-27 21:58:41.630231 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:41.630242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:41.630253 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:41.630264 | orchestrator | 2025-09-27 21:58:41.630275 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-27 21:58:41.630286 | orchestrator | Saturday 27 September 2025 21:58:41 +0000 (0:00:00.147) 0:01:06.728 **** 2025-09-27 21:58:41.630304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:44.373597 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:44.373714 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:44.373742 | orchestrator | 2025-09-27 21:58:44.373765 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-27 21:58:44.373787 | orchestrator | Saturday 27 September 2025 
21:58:41 +0000 (0:00:00.144) 0:01:06.872 **** 2025-09-27 21:58:44.373808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:44.373828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:44.373840 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:44.373851 | orchestrator | 2025-09-27 21:58:44.373863 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-27 21:58:44.373874 | orchestrator | Saturday 27 September 2025 21:58:41 +0000 (0:00:00.144) 0:01:07.016 **** 2025-09-27 21:58:44.373885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:44.373930 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:44.373951 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:44.373971 | orchestrator | 2025-09-27 21:58:44.373990 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-27 21:58:44.374010 | orchestrator | Saturday 27 September 2025 21:58:41 +0000 (0:00:00.146) 0:01:07.163 **** 2025-09-27 21:58:44.374107 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:44.374130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:44.374151 | orchestrator | skipping: 
[testbed-node-5] 2025-09-27 21:58:44.374173 | orchestrator | 2025-09-27 21:58:44.374212 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-27 21:58:44.374233 | orchestrator | Saturday 27 September 2025 21:58:42 +0000 (0:00:00.141) 0:01:07.304 **** 2025-09-27 21:58:44.374253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:44.374275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:44.374296 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:44.374317 | orchestrator | 2025-09-27 21:58:44.374339 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-27 21:58:44.374361 | orchestrator | Saturday 27 September 2025 21:58:42 +0000 (0:00:00.291) 0:01:07.595 **** 2025-09-27 21:58:44.374383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:44.374405 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:44.374426 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:44.374447 | orchestrator | 2025-09-27 21:58:44.374468 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-27 21:58:44.374489 | orchestrator | Saturday 27 September 2025 21:58:42 +0000 (0:00:00.146) 0:01:07.742 **** 2025-09-27 21:58:44.374510 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:58:44.374532 | orchestrator | 2025-09-27 21:58:44.374553 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-09-27 21:58:44.374573 | orchestrator | Saturday 27 September 2025 21:58:42 +0000 (0:00:00.503) 0:01:08.246 **** 2025-09-27 21:58:44.374594 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:58:44.374614 | orchestrator | 2025-09-27 21:58:44.374635 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-27 21:58:44.374674 | orchestrator | Saturday 27 September 2025 21:58:43 +0000 (0:00:00.478) 0:01:08.725 **** 2025-09-27 21:58:44.374695 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:58:44.374715 | orchestrator | 2025-09-27 21:58:44.374734 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-27 21:58:44.374754 | orchestrator | Saturday 27 September 2025 21:58:43 +0000 (0:00:00.132) 0:01:08.857 **** 2025-09-27 21:58:44.374774 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'vg_name': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'}) 2025-09-27 21:58:44.374795 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'vg_name': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'}) 2025-09-27 21:58:44.374814 | orchestrator | 2025-09-27 21:58:44.374834 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-27 21:58:44.374867 | orchestrator | Saturday 27 September 2025 21:58:43 +0000 (0:00:00.165) 0:01:09.023 **** 2025-09-27 21:58:44.374911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:44.374933 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:44.374953 | orchestrator | skipping: 
[testbed-node-5] 2025-09-27 21:58:44.374973 | orchestrator | 2025-09-27 21:58:44.374993 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-27 21:58:44.375013 | orchestrator | Saturday 27 September 2025 21:58:43 +0000 (0:00:00.145) 0:01:09.169 **** 2025-09-27 21:58:44.375066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:44.375088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:44.375109 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:44.375128 | orchestrator | 2025-09-27 21:58:44.375148 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-27 21:58:44.375169 | orchestrator | Saturday 27 September 2025 21:58:44 +0000 (0:00:00.144) 0:01:09.313 **** 2025-09-27 21:58:44.375188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})  2025-09-27 21:58:44.375208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})  2025-09-27 21:58:44.375228 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:58:44.375248 | orchestrator | 2025-09-27 21:58:44.375268 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-27 21:58:44.375287 | orchestrator | Saturday 27 September 2025 21:58:44 +0000 (0:00:00.147) 0:01:09.460 **** 2025-09-27 21:58:44.375307 | orchestrator | ok: [testbed-node-5] => { 2025-09-27 21:58:44.375327 | orchestrator |  "lvm_report": { 2025-09-27 21:58:44.375347 | orchestrator |  "lv": [ 2025-09-27 
21:58:44.375367 | orchestrator |  { 2025-09-27 21:58:44.375387 | orchestrator |  "lv_name": "osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c", 2025-09-27 21:58:44.375415 | orchestrator |  "vg_name": "ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c" 2025-09-27 21:58:44.375435 | orchestrator |  }, 2025-09-27 21:58:44.375455 | orchestrator |  { 2025-09-27 21:58:44.375475 | orchestrator |  "lv_name": "osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5", 2025-09-27 21:58:44.375495 | orchestrator |  "vg_name": "ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5" 2025-09-27 21:58:44.375515 | orchestrator |  } 2025-09-27 21:58:44.375535 | orchestrator |  ], 2025-09-27 21:58:44.375554 | orchestrator |  "pv": [ 2025-09-27 21:58:44.375574 | orchestrator |  { 2025-09-27 21:58:44.375594 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-27 21:58:44.375614 | orchestrator |  "vg_name": "ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c" 2025-09-27 21:58:44.375634 | orchestrator |  }, 2025-09-27 21:58:44.375653 | orchestrator |  { 2025-09-27 21:58:44.375673 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-27 21:58:44.375693 | orchestrator |  "vg_name": "ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5" 2025-09-27 21:58:44.375713 | orchestrator |  } 2025-09-27 21:58:44.375732 | orchestrator |  ] 2025-09-27 21:58:44.375752 | orchestrator |  } 2025-09-27 21:58:44.375772 | orchestrator | } 2025-09-27 21:58:44.375792 | orchestrator | 2025-09-27 21:58:44.375812 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:58:44.375845 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-27 21:58:44.375865 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-27 21:58:44.375885 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-27 21:58:44.375905 | orchestrator | 2025-09-27 21:58:44.375924 | 
orchestrator | 2025-09-27 21:58:44.375944 | orchestrator | 2025-09-27 21:58:44.375963 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:58:44.375983 | orchestrator | Saturday 27 September 2025 21:58:44 +0000 (0:00:00.133) 0:01:09.594 **** 2025-09-27 21:58:44.376003 | orchestrator | =============================================================================== 2025-09-27 21:58:44.376023 | orchestrator | Create block VGs -------------------------------------------------------- 5.77s 2025-09-27 21:58:44.376065 | orchestrator | Create block LVs -------------------------------------------------------- 4.07s 2025-09-27 21:58:44.376085 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s 2025-09-27 21:58:44.376105 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2025-09-27 21:58:44.376125 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s 2025-09-27 21:58:44.376144 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.52s 2025-09-27 21:58:44.376164 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.50s 2025-09-27 21:58:44.376184 | orchestrator | Add known partitions to the list of available block devices ------------- 1.39s 2025-09-27 21:58:44.376216 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s 2025-09-27 21:58:44.618347 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2025-09-27 21:58:44.618479 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2025-09-27 21:58:44.618505 | orchestrator | Print LVM report data --------------------------------------------------- 0.89s 2025-09-27 21:58:44.618526 | orchestrator | Add known partitions to the list of 
available block devices ------------- 0.76s 2025-09-27 21:58:44.618544 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s 2025-09-27 21:58:44.618563 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-09-27 21:58:44.618582 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.67s 2025-09-27 21:58:44.618601 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s 2025-09-27 21:58:44.618618 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.64s 2025-09-27 21:58:44.618636 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-09-27 21:58:44.618654 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-09-27 21:58:56.716975 | orchestrator | 2025-09-27 21:58:56 | INFO  | Task 327bd818-fc46-4faa-a072-d217b011eb72 (facts) was prepared for execution. 2025-09-27 21:58:56.717158 | orchestrator | 2025-09-27 21:58:56 | INFO  | It takes a moment until task 327bd818-fc46-4faa-a072-d217b011eb72 (facts) has been started and output is visible here. 
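The LVM validation play above gathers `lvs`/`pvs` output as JSON and merges it into the `lvm_report` structure printed near the end of the run. A minimal sketch of what the "Get list of Ceph LVs/PVs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" tasks could look like — the task titles and register-variable names are taken from the log, but this is a reconstruction, not the actual osism role source:

```yaml
# Sketch only: reconstructed from the task titles in the log above.
- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false

# lvs/pvs --reportformat json wrap their rows under report[0].lv / report[0].pv,
# which matches the lv/pv keys of the lvm_report dict printed in the log.
- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
  ansible.builtin.set_fact:
    lvm_report:
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report.0.lv }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report.0.pv }}"
```

With the two data disks seen in this run, the resulting `lvm_report.pv` maps `/dev/sdb` and `/dev/sdc` to their `ceph-<uuid>` VGs, which is exactly what the "Print LVM report data" task displays.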
2025-09-27 21:59:08.110125 | orchestrator | 2025-09-27 21:59:08.110226 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-27 21:59:08.110240 | orchestrator | 2025-09-27 21:59:08.110247 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-27 21:59:08.110253 | orchestrator | Saturday 27 September 2025 21:59:00 +0000 (0:00:00.207) 0:00:00.207 **** 2025-09-27 21:59:08.110257 | orchestrator | ok: [testbed-manager] 2025-09-27 21:59:08.110263 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:08.110288 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:59:08.110292 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:59:08.110297 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:59:08.110301 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:59:08.110306 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:59:08.110310 | orchestrator | 2025-09-27 21:59:08.110314 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-27 21:59:08.110319 | orchestrator | Saturday 27 September 2025 21:59:01 +0000 (0:00:00.908) 0:00:01.115 **** 2025-09-27 21:59:08.110334 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:59:08.110340 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:08.110345 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:08.110349 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:08.110354 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:08.110358 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:08.110362 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:08.110367 | orchestrator | 2025-09-27 21:59:08.110371 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-27 21:59:08.110375 | orchestrator | 2025-09-27 21:59:08.110379 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-09-27 21:59:08.110384 | orchestrator | Saturday 27 September 2025 21:59:02 +0000 (0:00:01.076) 0:00:02.192 **** 2025-09-27 21:59:08.110388 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:59:08.110392 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:08.110396 | orchestrator | ok: [testbed-manager] 2025-09-27 21:59:08.110401 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:59:08.110405 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:59:08.110409 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:59:08.110413 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:59:08.110418 | orchestrator | 2025-09-27 21:59:08.110422 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-27 21:59:08.110426 | orchestrator | 2025-09-27 21:59:08.110431 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-27 21:59:08.110435 | orchestrator | Saturday 27 September 2025 21:59:07 +0000 (0:00:04.693) 0:00:06.885 **** 2025-09-27 21:59:08.110439 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:59:08.110444 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:08.110448 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:08.110452 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:08.110456 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:08.110461 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:08.110465 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:08.110469 | orchestrator | 2025-09-27 21:59:08.110473 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:59:08.110478 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:59:08.110484 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-09-27 21:59:08.110488 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:59:08.110493 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:59:08.110497 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:59:08.110501 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:59:08.110506 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:59:08.110515 | orchestrator | 2025-09-27 21:59:08.110520 | orchestrator | 2025-09-27 21:59:08.110524 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:59:08.110528 | orchestrator | Saturday 27 September 2025 21:59:07 +0000 (0:00:00.509) 0:00:07.395 **** 2025-09-27 21:59:08.110533 | orchestrator | =============================================================================== 2025-09-27 21:59:08.110537 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.69s 2025-09-27 21:59:08.110541 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s 2025-09-27 21:59:08.110546 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.91s 2025-09-27 21:59:08.110550 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-09-27 21:59:20.388351 | orchestrator | 2025-09-27 21:59:20 | INFO  | Task da563efa-86e2-493d-967e-aa5630bcfb43 (frr) was prepared for execution. 2025-09-27 21:59:20.388471 | orchestrator | 2025-09-27 21:59:20 | INFO  | It takes a moment until task da563efa-86e2-493d-967e-aa5630bcfb43 (frr) has been started and output is visible here. 
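The `osism.commons.facts` play that just completed follows the standard Ansible local-facts pattern: files dropped into `/etc/ansible/facts.d` are exposed to later plays under `ansible_local` once facts are re-gathered. A hedged sketch of the two tasks shown in the recap — the `facts_files` variable name is a hypothetical placeholder, not taken from the role:

```yaml
- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

# Skipped in this run (no fact files configured for the testbed);
# "facts_files" is a hypothetical variable name for illustration.
- name: Copy fact files
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: /etc/ansible/facts.d/
    mode: "0644"
  loop: "{{ facts_files | default([]) }}"
```

The follow-up "Gathers facts about hosts" play then runs `ansible.builtin.setup`, which is what picks up any `*.fact` files from that directory.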
2025-09-27 21:59:49.929528 | orchestrator | 2025-09-27 21:59:49.929626 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-27 21:59:49.929637 | orchestrator | 2025-09-27 21:59:49.929646 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-27 21:59:49.929654 | orchestrator | Saturday 27 September 2025 21:59:24 +0000 (0:00:00.237) 0:00:00.238 **** 2025-09-27 21:59:49.929662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:59:49.929671 | orchestrator | 2025-09-27 21:59:49.929682 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-27 21:59:49.929694 | orchestrator | Saturday 27 September 2025 21:59:24 +0000 (0:00:00.225) 0:00:00.464 **** 2025-09-27 21:59:49.929713 | orchestrator | changed: [testbed-manager] 2025-09-27 21:59:49.929727 | orchestrator | 2025-09-27 21:59:49.929738 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-27 21:59:49.929750 | orchestrator | Saturday 27 September 2025 21:59:25 +0000 (0:00:01.113) 0:00:01.577 **** 2025-09-27 21:59:49.929760 | orchestrator | changed: [testbed-manager] 2025-09-27 21:59:49.929770 | orchestrator | 2025-09-27 21:59:49.929783 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-27 21:59:49.929793 | orchestrator | Saturday 27 September 2025 21:59:38 +0000 (0:00:12.701) 0:00:14.278 **** 2025-09-27 21:59:49.929805 | orchestrator | ok: [testbed-manager] 2025-09-27 21:59:49.929818 | orchestrator | 2025-09-27 21:59:49.929830 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-27 21:59:49.929841 | orchestrator | Saturday 27 September 2025 21:59:39 +0000 (0:00:01.228) 0:00:15.507 **** 2025-09-27 
21:59:49.929853 | orchestrator | changed: [testbed-manager] 2025-09-27 21:59:49.929860 | orchestrator | 2025-09-27 21:59:49.929867 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-27 21:59:49.929874 | orchestrator | Saturday 27 September 2025 21:59:40 +0000 (0:00:00.907) 0:00:16.415 **** 2025-09-27 21:59:49.929881 | orchestrator | ok: [testbed-manager] 2025-09-27 21:59:49.929888 | orchestrator | 2025-09-27 21:59:49.929912 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-27 21:59:49.929921 | orchestrator | Saturday 27 September 2025 21:59:41 +0000 (0:00:01.155) 0:00:17.570 **** 2025-09-27 21:59:49.929928 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:59:49.929935 | orchestrator | 2025-09-27 21:59:49.929942 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-27 21:59:49.929950 | orchestrator | Saturday 27 September 2025 21:59:42 +0000 (0:00:00.794) 0:00:18.365 **** 2025-09-27 21:59:49.929957 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:59:49.929963 | orchestrator | 2025-09-27 21:59:49.929971 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-27 21:59:49.930000 | orchestrator | Saturday 27 September 2025 21:59:42 +0000 (0:00:00.160) 0:00:18.526 **** 2025-09-27 21:59:49.930007 | orchestrator | changed: [testbed-manager] 2025-09-27 21:59:49.930066 | orchestrator | 2025-09-27 21:59:49.930074 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-27 21:59:49.930081 | orchestrator | Saturday 27 September 2025 21:59:43 +0000 (0:00:00.962) 0:00:19.488 **** 2025-09-27 21:59:49.930089 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-27 21:59:49.930097 | orchestrator | changed: [testbed-manager] => 
(item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-27 21:59:49.930136 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-27 21:59:49.930144 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-27 21:59:49.930153 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-27 21:59:49.930161 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-27 21:59:49.930168 | orchestrator | 2025-09-27 21:59:49.930175 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-27 21:59:49.930183 | orchestrator | Saturday 27 September 2025 21:59:46 +0000 (0:00:03.117) 0:00:22.606 **** 2025-09-27 21:59:49.930191 | orchestrator | ok: [testbed-manager] 2025-09-27 21:59:49.930198 | orchestrator | 2025-09-27 21:59:49.930207 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-27 21:59:49.930214 | orchestrator | Saturday 27 September 2025 21:59:48 +0000 (0:00:01.412) 0:00:24.019 **** 2025-09-27 21:59:49.930221 | orchestrator | changed: [testbed-manager] 2025-09-27 21:59:49.930229 | orchestrator | 2025-09-27 21:59:49.930236 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:59:49.930243 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:59:49.930250 | orchestrator | 2025-09-27 21:59:49.930256 | orchestrator | 2025-09-27 21:59:49.930263 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:59:49.930269 | orchestrator | Saturday 27 September 2025 21:59:49 +0000 (0:00:01.368) 0:00:25.387 **** 2025-09-27 
21:59:49.930276 | orchestrator | =============================================================================== 2025-09-27 21:59:49.930282 | orchestrator | osism.services.frr : Install frr package ------------------------------- 12.70s 2025-09-27 21:59:49.930289 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.12s 2025-09-27 21:59:49.930296 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.41s 2025-09-27 21:59:49.930302 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.37s 2025-09-27 21:59:49.930324 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.23s 2025-09-27 21:59:49.930331 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.16s 2025-09-27 21:59:49.930338 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.11s 2025-09-27 21:59:49.930344 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.96s 2025-09-27 21:59:49.930351 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s 2025-09-27 21:59:49.930357 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.79s 2025-09-27 21:59:49.930364 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2025-09-27 21:59:49.930371 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s 2025-09-27 21:59:50.215649 | orchestrator | 2025-09-27 21:59:50.218079 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Sep 27 21:59:50 UTC 2025 2025-09-27 21:59:50.218144 | orchestrator | 2025-09-27 21:59:52.059789 | orchestrator | 2025-09-27 21:59:52 | INFO  | Collection nutshell is prepared for execution 2025-09-27 21:59:52.059889 | orchestrator | 2025-09-27 
21:59:52 | INFO  | D [0] - dotfiles
2025-09-27 22:00:02.210276 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [0] - homer
2025-09-27 22:00:02.210387 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [0] - netdata
2025-09-27 22:00:02.210401 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [0] - openstackclient
2025-09-27 22:00:02.210422 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [0] - phpmyadmin
2025-09-27 22:00:02.210432 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [0] - common
2025-09-27 22:00:02.214539 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [1] -- loadbalancer
2025-09-27 22:00:02.214634 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [2] --- opensearch
2025-09-27 22:00:02.214817 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [2] --- mariadb-ng
2025-09-27 22:00:02.214864 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [3] ---- horizon
2025-09-27 22:00:02.215144 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [3] ---- keystone
2025-09-27 22:00:02.215163 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [4] ----- neutron
2025-09-27 22:00:02.215355 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [5] ------ wait-for-nova
2025-09-27 22:00:02.215545 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [5] ------ octavia
2025-09-27 22:00:02.216966 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [4] ----- barbican
2025-09-27 22:00:02.216989 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [4] ----- designate
2025-09-27 22:00:02.217204 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [4] ----- ironic
2025-09-27 22:00:02.217435 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [4] ----- placement
2025-09-27 22:00:02.217823 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [4] ----- magnum
2025-09-27 22:00:02.218614 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [1] -- openvswitch
2025-09-27 22:00:02.218635 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [2] --- ovn
2025-09-27 22:00:02.219256 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [1] -- memcached
2025-09-27 22:00:02.219274 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [1] -- redis
2025-09-27 22:00:02.219822 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [1] -- rabbitmq-ng
2025-09-27 22:00:02.220201 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [0] - kubernetes
2025-09-27 22:00:02.222921 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [1] -- kubeconfig
2025-09-27 22:00:02.222943 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [1] -- copy-kubeconfig
2025-09-27 22:00:02.223228 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [0] - ceph
2025-09-27 22:00:02.225613 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [1] -- ceph-pools
2025-09-27 22:00:02.225633 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [2] --- copy-ceph-keys
2025-09-27 22:00:02.225910 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [3] ---- cephclient
2025-09-27 22:00:02.226109 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-27 22:00:02.226311 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [4] ----- wait-for-keystone
2025-09-27 22:00:02.226526 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-27 22:00:02.226542 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [5] ------ glance
2025-09-27 22:00:02.226716 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [5] ------ cinder
2025-09-27 22:00:02.226908 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [5] ------ nova
2025-09-27 22:00:02.227421 | orchestrator | 2025-09-27 22:00:02 | INFO  | A [4] ----- prometheus
2025-09-27 22:00:02.227504 | orchestrator | 2025-09-27 22:00:02 | INFO  | D [5] ------ grafana
2025-09-27 22:00:02.431472 | orchestrator | 2025-09-27 22:00:02 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-27 22:00:02.431578 | orchestrator | 2025-09-27 22:00:02 | INFO  | Tasks are running in the background
2025-09-27 22:00:05.594960 | orchestrator | 2025-09-27 22:00:05 | INFO  | No task IDs specified, wait for
all currently running tasks 2025-09-27 22:00:07.699973 | orchestrator | 2025-09-27 22:00:07 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:00:07.700077 | orchestrator | 2025-09-27 22:00:07 | INFO  | Task ac1d5f6b-957b-44e9-97e1-a9ffc31bcf68 is in state STARTED 2025-09-27 22:00:07.702753 | orchestrator | 2025-09-27 22:00:07 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED 2025-09-27 22:00:07.703379 | orchestrator | 2025-09-27 22:00:07 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED 2025-09-27 22:00:07.703876 | orchestrator | 2025-09-27 22:00:07 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED 2025-09-27 22:00:07.704335 | orchestrator | 2025-09-27 22:00:07 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED 2025-09-27 22:00:07.704819 | orchestrator | 2025-09-27 22:00:07 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:00:07.704917 | orchestrator | 2025-09-27 22:00:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:00:10.758378 | orchestrator | 2025-09-27 22:00:10 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:00:10.758476 | orchestrator | 2025-09-27 22:00:10 | INFO  | Task ac1d5f6b-957b-44e9-97e1-a9ffc31bcf68 is in state STARTED 2025-09-27 22:00:10.758488 | orchestrator | 2025-09-27 22:00:10 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED 2025-09-27 22:00:10.758495 | orchestrator | 2025-09-27 22:00:10 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED 2025-09-27 22:00:10.760761 | orchestrator | 2025-09-27 22:00:10 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED 2025-09-27 22:00:10.760781 | orchestrator | 2025-09-27 22:00:10 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED 2025-09-27 22:00:10.760788 | orchestrator | 2025-09-27 22:00:10 | INFO  | Task 
157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:10.760795 | orchestrator | 2025-09-27 22:00:10 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:29.272122 | orchestrator |
2025-09-27 22:00:29.272258 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-27 22:00:29.272348 | orchestrator |
2025-09-27 22:00:29.272358 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
**** 2025-09-27 22:00:29.272364 | orchestrator | Saturday 27 September 2025 22:00:13 +0000 (0:00:00.730) 0:00:00.730 **** 2025-09-27 22:00:29.272370 | orchestrator | changed: [testbed-manager] 2025-09-27 22:00:29.272377 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:00:29.272383 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:00:29.272389 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:00:29.272395 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:00:29.272400 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:00:29.272406 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:00:29.272412 | orchestrator | 2025-09-27 22:00:29.272417 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-09-27 22:00:29.272423 | orchestrator | Saturday 27 September 2025 22:00:18 +0000 (0:00:04.887) 0:00:05.617 **** 2025-09-27 22:00:29.272430 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-27 22:00:29.272437 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-27 22:00:29.272443 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-27 22:00:29.272448 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-27 22:00:29.272454 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-27 22:00:29.272460 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-27 22:00:29.272466 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-27 22:00:29.272471 | orchestrator | 2025-09-27 22:00:29.272477 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-09-27 22:00:29.272484 | orchestrator | Saturday 27 September 2025 22:00:20 +0000 (0:00:01.335) 0:00:06.953 **** 2025-09-27 22:00:29.272493 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 22:00:19.398016', 'end': '2025-09-27 22:00:19.406020', 'delta': '0:00:00.008004', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 22:00:29.272510 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 22:00:20.024319', 'end': '2025-09-27 22:00:20.031548', 'delta': '0:00:00.007229', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 22:00:29.272538 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 22:00:19.684725', 'end': '2025-09-27 22:00:19.690772', 'delta': '0:00:00.006047', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 22:00:29.272575 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 22:00:19.755125', 'end': '2025-09-27 22:00:19.763882', 'delta': '0:00:00.008757', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 22:00:29.272585 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 22:00:19.544268', 'end': '2025-09-27 22:00:19.549431', 'delta': '0:00:00.005163', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 22:00:29.272605 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 22:00:19.395691', 'end': '2025-09-27 22:00:19.404359', 'delta': '0:00:00.008668', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 22:00:29.272966 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 22:00:19.534041', 'end': '2025-09-27 22:00:19.542256', 'delta': '0:00:00.008215', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 22:00:29.273006 | orchestrator | 2025-09-27 22:00:29.273014 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-27 22:00:29.273020 | orchestrator | Saturday 27 September 2025 22:00:21 +0000 (0:00:01.463) 0:00:08.416 **** 2025-09-27 22:00:29.273027 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-27 22:00:29.273033 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-27 22:00:29.273039 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-27 22:00:29.273044 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-27 22:00:29.273050 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-27 22:00:29.273055 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-27 22:00:29.273061 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-27 22:00:29.273067 | orchestrator | 2025-09-27 22:00:29.273073 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
******************
2025-09-27 22:00:29.273078 | orchestrator | Saturday 27 September 2025 22:00:24 +0000 (0:00:02.792) 0:00:11.209 ****
2025-09-27 22:00:29.273085 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-27 22:00:29.273090 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-27 22:00:29.273096 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-27 22:00:29.273102 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-27 22:00:29.273107 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-27 22:00:29.273113 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-27 22:00:29.273119 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-27 22:00:29.273125 | orchestrator |
2025-09-27 22:00:29.273131 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:00:29.273183 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:00:29.273192 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:00:29.273198 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:00:29.273204 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:00:29.273209 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:00:29.273215 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:00:29.273221 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:00:29.273227 | orchestrator |
2025-09-27 22:00:29.273232 | orchestrator |
2025-09-27 22:00:29.273238 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:00:29.273244 | orchestrator | Saturday 27 September 2025 22:00:26 +0000 (0:00:02.522) 0:00:13.732 ****
2025-09-27 22:00:29.273250 | orchestrator | ===============================================================================
2025-09-27 22:00:29.273256 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.89s
2025-09-27 22:00:29.273262 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.79s
2025-09-27 22:00:29.273273 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.52s
2025-09-27 22:00:29.273279 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.46s
2025-09-27 22:00:29.273285 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.34s
2025-09-27 22:00:29.273291 | orchestrator | 2025-09-27 22:00:29 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:29.273297 | orchestrator | 2025-09-27 22:00:29 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:29.273303 | orchestrator | 2025-09-27 22:00:29 | INFO  | Task ac1d5f6b-957b-44e9-97e1-a9ffc31bcf68 is in state SUCCESS
2025-09-27 22:00:29.273308 | orchestrator | 2025-09-27 22:00:29 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:29.273314 | orchestrator | 2025-09-27 22:00:29 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:29.273323 | orchestrator | 2025-09-27 22:00:29 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:29.273862 | orchestrator | 2025-09-27 22:00:29 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED
2025-09-27 22:00:29.274263 | orchestrator | 2025-09-27 22:00:29 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:29.274284 | orchestrator | 2025-09-27 22:00:29 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:32.455360 | orchestrator | 2025-09-27 22:00:32 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:32.455467 | orchestrator | 2025-09-27 22:00:32 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:32.455479 | orchestrator | 2025-09-27 22:00:32 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:32.455489 | orchestrator | 2025-09-27 22:00:32 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:32.455498 | orchestrator | 2025-09-27 22:00:32 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:32.455506 | orchestrator | 2025-09-27 22:00:32 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED
2025-09-27 22:00:32.455515 | orchestrator | 2025-09-27 22:00:32 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:32.455525 | orchestrator | 2025-09-27 22:00:32 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:35.495043 | orchestrator | 2025-09-27 22:00:35 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:35.495130 | orchestrator | 2025-09-27 22:00:35 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:35.495139 | orchestrator | 2025-09-27 22:00:35 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:35.495146 | orchestrator | 2025-09-27 22:00:35 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:35.495185 | orchestrator | 2025-09-27 22:00:35 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:35.495192 | orchestrator | 2025-09-27 22:00:35 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED
2025-09-27 22:00:35.495199 | orchestrator | 2025-09-27 22:00:35 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:35.495205 | orchestrator | 2025-09-27 22:00:35 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:38.574716 | orchestrator | 2025-09-27 22:00:38 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:38.574860 | orchestrator | 2025-09-27 22:00:38 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:38.575515 | orchestrator | 2025-09-27 22:00:38 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:38.575538 | orchestrator | 2025-09-27 22:00:38 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:38.581129 | orchestrator | 2025-09-27 22:00:38 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:38.588211 | orchestrator | 2025-09-27 22:00:38 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED
2025-09-27 22:00:38.588258 | orchestrator | 2025-09-27 22:00:38 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:38.588268 | orchestrator | 2025-09-27 22:00:38 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:41.641029 | orchestrator | 2025-09-27 22:00:41 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:41.717975 | orchestrator | 2025-09-27 22:00:41 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:41.718137 | orchestrator | 2025-09-27 22:00:41 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:41.718174 | orchestrator | 2025-09-27 22:00:41 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:41.718187 | orchestrator | 2025-09-27 22:00:41 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:41.718198 | orchestrator | 2025-09-27 22:00:41 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED
2025-09-27 22:00:41.718210 | orchestrator | 2025-09-27 22:00:41 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:41.718242 | orchestrator | 2025-09-27 22:00:41 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:44.738833 | orchestrator | 2025-09-27 22:00:44 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:44.738943 | orchestrator | 2025-09-27 22:00:44 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:44.738957 | orchestrator | 2025-09-27 22:00:44 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:44.739299 | orchestrator | 2025-09-27 22:00:44 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:44.746547 | orchestrator | 2025-09-27 22:00:44 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:44.750316 | orchestrator | 2025-09-27 22:00:44 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED
2025-09-27 22:00:44.753728 | orchestrator | 2025-09-27 22:00:44 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:44.753791 | orchestrator | 2025-09-27 22:00:44 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:47.882091 | orchestrator | 2025-09-27 22:00:47 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:47.882268 | orchestrator | 2025-09-27 22:00:47 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:47.882297 | orchestrator | 2025-09-27 22:00:47 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:47.882318 | orchestrator | 2025-09-27 22:00:47 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:47.882336 | orchestrator | 2025-09-27 22:00:47 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:47.882396 | orchestrator | 2025-09-27 22:00:47 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED
2025-09-27 22:00:47.882419 | orchestrator | 2025-09-27 22:00:47 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:47.882432 | orchestrator | 2025-09-27 22:00:47 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:50.922576 | orchestrator | 2025-09-27 22:00:50 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:50.930640 | orchestrator | 2025-09-27 22:00:50 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:50.978896 | orchestrator | 2025-09-27 22:00:50 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:50.991079 | orchestrator | 2025-09-27 22:00:50 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:50.995358 | orchestrator | 2025-09-27 22:00:50 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:50.997673 | orchestrator | 2025-09-27 22:00:50 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state STARTED
2025-09-27 22:00:50.998121 | orchestrator | 2025-09-27 22:00:50 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:50.998155 | orchestrator | 2025-09-27 22:00:50 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:54.072748 | orchestrator | 2025-09-27 22:00:54 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:54.073913 | orchestrator | 2025-09-27 22:00:54 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:54.076318 | orchestrator | 2025-09-27 22:00:54 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:54.077774 | orchestrator | 2025-09-27 22:00:54 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:54.079378 | orchestrator | 2025-09-27 22:00:54 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:54.080329 | orchestrator | 2025-09-27 22:00:54 | INFO  | Task 8169469d-cc6c-4e29-bcbb-34af786a4834 is in state SUCCESS
2025-09-27 22:00:54.081637 | orchestrator | 2025-09-27 22:00:54 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:54.081974 | orchestrator | 2025-09-27 22:00:54 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:00:57.128672 | orchestrator | 2025-09-27 22:00:57 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:00:57.128789 | orchestrator | 2025-09-27 22:00:57 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:00:57.128829 | orchestrator | 2025-09-27 22:00:57 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state STARTED
2025-09-27 22:00:57.128842 | orchestrator | 2025-09-27 22:00:57 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:00:57.128854 | orchestrator | 2025-09-27 22:00:57 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:00:57.128866 | orchestrator | 2025-09-27 22:00:57 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:00:57.128878 | orchestrator | 2025-09-27 22:00:57 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:00.157114 | orchestrator | 2025-09-27 22:01:00 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:01:00.157383 | orchestrator | 2025-09-27 22:01:00 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:00.159670 | orchestrator | 2025-09-27 22:01:00 | INFO  | Task aa97e447-ba77-4922-9084-02a34dd4f1db is in state SUCCESS
2025-09-27 22:01:00.159858 | orchestrator | 2025-09-27 22:01:00 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:01:00.161776 | orchestrator | 2025-09-27 22:01:00 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:00.165382 | orchestrator | 2025-09-27 22:01:00 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:00.166149 | orchestrator | 2025-09-27 22:01:00 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:03.262226 | orchestrator | 2025-09-27 22:01:03 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:01:03.284329 | orchestrator | 2025-09-27 22:01:03 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:03.296010 | orchestrator | 2025-09-27 22:01:03 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:01:03.312953 | orchestrator | 2025-09-27 22:01:03 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:03.315313 | orchestrator | 2025-09-27 22:01:03 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:03.315374 | orchestrator | 2025-09-27 22:01:03 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:06.448143 | orchestrator | 2025-09-27 22:01:06 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:01:06.449567 | orchestrator | 2025-09-27 22:01:06 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:06.450167 | orchestrator | 2025-09-27 22:01:06 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:01:06.450652 | orchestrator | 2025-09-27 22:01:06 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:06.452585 | orchestrator | 2025-09-27 22:01:06 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:06.452633 | orchestrator | 2025-09-27 22:01:06 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:09.481746 | orchestrator | 2025-09-27 22:01:09 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:01:09.483418 | orchestrator | 2025-09-27 22:01:09 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:09.483839 | orchestrator | 2025-09-27 22:01:09 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:01:09.484451 | orchestrator | 2025-09-27 22:01:09 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:09.485060 | orchestrator | 2025-09-27 22:01:09 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:09.485088 | orchestrator | 2025-09-27 22:01:09 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:12.549624 | orchestrator | 2025-09-27 22:01:12 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:01:12.555122 | orchestrator | 2025-09-27 22:01:12 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:12.558423 | orchestrator | 2025-09-27 22:01:12 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:01:12.565066 | orchestrator | 2025-09-27 22:01:12 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:12.565810 | orchestrator | 2025-09-27 22:01:12 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:12.566369 | orchestrator | 2025-09-27 22:01:12 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:15.605399 | orchestrator | 2025-09-27 22:01:15 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:01:15.606223 | orchestrator | 2025-09-27 22:01:15 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:15.607522 | orchestrator | 2025-09-27 22:01:15 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:01:15.608438 | orchestrator | 2025-09-27 22:01:15 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:15.609944 | orchestrator | 2025-09-27 22:01:15 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:15.610041 | orchestrator | 2025-09-27 22:01:15 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:18.675524 | orchestrator | 2025-09-27 22:01:18 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:01:18.675634 | orchestrator | 2025-09-27 22:01:18 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:18.675928 | orchestrator | 2025-09-27 22:01:18 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state STARTED
2025-09-27 22:01:18.677363 | orchestrator | 2025-09-27 22:01:18 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:18.679172 | orchestrator | 2025-09-27 22:01:18 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:18.679345 | orchestrator | 2025-09-27 22:01:18 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:21.728273 | orchestrator | 2025-09-27 22:01:21 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state STARTED
2025-09-27 22:01:21.728383 | orchestrator | 2025-09-27 22:01:21 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:21.729384 | orchestrator | 2025-09-27 22:01:21 | INFO  | Task a08a743e-7ec5-4c73-b21f-df50f5616a41 is in state SUCCESS
2025-09-27 22:01:21.731636 | orchestrator |
2025-09-27 22:01:21.731685 | orchestrator |
2025-09-27 22:01:21.731694 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-09-27 22:01:21.731703 | orchestrator |
2025-09-27 22:01:21.731710 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-09-27 22:01:21.731719 | orchestrator | Saturday 27 September 2025 22:00:15 +0000 (0:00:00.654) 0:00:00.654 ****
2025-09-27 22:01:21.731727 | orchestrator | ok: [testbed-manager] => {
2025-09-27 22:01:21.731736 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-09-27 22:01:21.731745 | orchestrator | }
2025-09-27 22:01:21.731753 | orchestrator |
2025-09-27 22:01:21.731761 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-09-27 22:01:21.731768 | orchestrator | Saturday 27 September 2025 22:00:15 +0000 (0:00:00.794) 0:00:01.449 ****
2025-09-27 22:01:21.731776 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.731785 | orchestrator |
2025-09-27 22:01:21.731792 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-09-27 22:01:21.731799 | orchestrator | Saturday 27 September 2025 22:00:16 +0000 (0:00:00.979) 0:00:02.429 ****
2025-09-27 22:01:21.731807 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-09-27 22:01:21.731814 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-09-27 22:01:21.731822 | orchestrator |
2025-09-27 22:01:21.731829 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-09-27 22:01:21.731837 | orchestrator | Saturday 27 September 2025 22:00:17 +0000 (0:00:00.967) 0:00:03.397 ****
2025-09-27 22:01:21.731844 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.731869 | orchestrator |
2025-09-27 22:01:21.731877 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-09-27 22:01:21.731884 | orchestrator | Saturday 27 September 2025 22:00:19 +0000 (0:00:01.786) 0:00:05.183 ****
2025-09-27 22:01:21.731891 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.731898 | orchestrator |
2025-09-27 22:01:21.731906 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-09-27 22:01:21.731913 | orchestrator | Saturday 27 September 2025 22:00:21 +0000 (0:00:01.729) 0:00:06.913 ****
2025-09-27 22:01:21.731920 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-09-27 22:01:21.731928 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.731935 | orchestrator |
2025-09-27 22:01:21.731942 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-09-27 22:01:21.731949 | orchestrator | Saturday 27 September 2025 22:00:47 +0000 (0:00:26.384) 0:00:33.297 ****
2025-09-27 22:01:21.731981 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.731988 | orchestrator |
2025-09-27 22:01:21.731995 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:01:21.732003 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:01:21.732012 | orchestrator |
2025-09-27 22:01:21.732020 | orchestrator |
2025-09-27 22:01:21.732027 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:01:21.732034 | orchestrator | Saturday 27 September 2025 22:00:51 +0000 (0:00:03.238) 0:00:36.536 ****
2025-09-27 22:01:21.732042 | orchestrator | ===============================================================================
2025-09-27 22:01:21.732049 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.38s
2025-09-27 22:01:21.732056 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.24s
2025-09-27 22:01:21.732064 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.79s
2025-09-27 22:01:21.732071 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.73s
2025-09-27 22:01:21.732079 | orchestrator | osism.services.homer : Create traefik external network ------------------ 0.98s
2025-09-27 22:01:21.732086 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.97s
2025-09-27 22:01:21.732094 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.79s
2025-09-27 22:01:21.732101 | orchestrator |
2025-09-27 22:01:21.732109 | orchestrator |
2025-09-27 22:01:21.732116 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-09-27 22:01:21.732123 | orchestrator |
2025-09-27 22:01:21.732130 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-09-27 22:01:21.732138 | orchestrator | Saturday 27 September 2025 22:00:13 +0000 (0:00:00.587) 0:00:00.587 ****
2025-09-27 22:01:21.732145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-09-27 22:01:21.732154 | orchestrator |
2025-09-27 22:01:21.732161 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-09-27 22:01:21.732168 | orchestrator | Saturday 27 September 2025 22:00:14 +0000 (0:00:00.931) 0:00:01.518 ****
2025-09-27 22:01:21.732176 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-09-27 22:01:21.732186 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-09-27 22:01:21.732221 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-09-27 22:01:21.732234 | orchestrator |
2025-09-27 22:01:21.732248 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-09-27 22:01:21.732259 | orchestrator | Saturday 27 September 2025 22:00:16 +0000 (0:00:01.879) 0:00:03.397 ****
2025-09-27 22:01:21.732270 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.732281 | orchestrator |
2025-09-27 22:01:21.732294 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-09-27 22:01:21.732314 | orchestrator | Saturday 27 September 2025 22:00:17 +0000 (0:00:01.385) 0:00:04.783 ****
2025-09-27 22:01:21.732344 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-09-27 22:01:21.732357 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.732369 | orchestrator |
2025-09-27 22:01:21.732380 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-09-27 22:01:21.732391 | orchestrator | Saturday 27 September 2025 22:00:50 +0000 (0:00:32.356) 0:00:37.139 ****
2025-09-27 22:01:21.732403 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.732415 | orchestrator |
2025-09-27 22:01:21.732427 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-09-27 22:01:21.732439 | orchestrator | Saturday 27 September 2025 22:00:51 +0000 (0:00:01.427) 0:00:38.566 ****
2025-09-27 22:01:21.732452 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.732465 | orchestrator |
2025-09-27 22:01:21.732479 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-09-27 22:01:21.732491 | orchestrator | Saturday 27 September 2025 22:00:52 +0000 (0:00:00.543) 0:00:39.110 ****
2025-09-27 22:01:21.732504 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.732517 | orchestrator |
2025-09-27 22:01:21.732530 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-09-27 22:01:21.732543 | orchestrator | Saturday 27 September 2025 22:00:55 +0000 (0:00:02.737) 0:00:41.847 ****
2025-09-27 22:01:21.732556 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.732565 | orchestrator |
2025-09-27 22:01:21.732574 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-09-27 22:01:21.732616 | orchestrator | Saturday 27 September 2025 22:00:56 +0000 (0:00:01.532) 0:00:43.379 ****
2025-09-27 22:01:21.732624 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.732631 | orchestrator |
2025-09-27 22:01:21.732641 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-09-27 22:01:21.732653 | orchestrator | Saturday 27 September 2025 22:00:57 +0000 (0:00:00.665) 0:00:44.045 ****
2025-09-27 22:01:21.732665 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.732676 | orchestrator |
2025-09-27 22:01:21.732687 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:01:21.732699 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:01:21.732711 | orchestrator |
2025-09-27 22:01:21.732723 | orchestrator |
2025-09-27 22:01:21.732735 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:01:21.732748 | orchestrator | Saturday 27 September 2025 22:00:57 +0000 (0:00:00.506) 0:00:44.552 ****
2025-09-27 22:01:21.732760 | orchestrator | ===============================================================================
2025-09-27 22:01:21.732772 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.36s
2025-09-27 22:01:21.732784 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.74s
2025-09-27 22:01:21.732796 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.88s
2025-09-27 22:01:21.732808 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.53s
2025-09-27 22:01:21.732819 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.43s
2025-09-27 22:01:21.732830 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.39s
2025-09-27 22:01:21.732842 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.93s
2025-09-27 22:01:21.732860 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.67s
2025-09-27 22:01:21.732872 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.54s
2025-09-27 22:01:21.732885 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.51s
2025-09-27 22:01:21.732907 | orchestrator |
2025-09-27 22:01:21.732920 | orchestrator |
2025-09-27 22:01:21.732932 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:01:21.732943 | orchestrator |
2025-09-27 22:01:21.732954 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:01:21.732966 | orchestrator | Saturday 27 September 2025 22:00:15 +0000 (0:00:00.979) 0:00:00.979 ****
2025-09-27 22:01:21.732974 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-27 22:01:21.732981 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-27 22:01:21.732988 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-27 22:01:21.732996 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-27 22:01:21.733003 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-27 22:01:21.733010 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-27 22:01:21.733017 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-27 22:01:21.733024 | orchestrator |
2025-09-27 22:01:21.733031 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-27 22:01:21.733039 | orchestrator |
2025-09-27 22:01:21.733046 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-27 22:01:21.733053 | orchestrator | Saturday 27 September 2025 22:00:17 +0000 (0:00:01.462) 0:00:02.442 ****
2025-09-27 22:01:21.733073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:01:21.733084 | orchestrator |
2025-09-27 22:01:21.733091 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-27 22:01:21.733098 | orchestrator | Saturday 27 September 2025 22:00:19 +0000 (0:00:02.431) 0:00:04.873 ****
2025-09-27 22:01:21.733105 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:01:21.733113 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:01:21.733120 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:01:21.733127 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:01:21.733134 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:01:21.733150 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:01:21.733158 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.733165 | orchestrator |
2025-09-27 22:01:21.733173 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-27 22:01:21.733180 | orchestrator | Saturday 27 September 2025 22:00:21 +0000 (0:00:03.318) 0:00:06.966 ****
2025-09-27 22:01:21.733215 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:01:21.733226 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:01:21.733234 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:01:21.733241 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:01:21.733248 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.733255 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:01:21.733262 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:01:21.733269 | orchestrator |
2025-09-27 22:01:21.733277 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-27 22:01:21.733284 | orchestrator | Saturday 27 September 2025 22:00:24 +0000 (0:00:02.090) 0:00:10.285 ****
2025-09-27 22:01:21.733291 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:01:21.733299 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:01:21.733306 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:01:21.733313 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:01:21.733320 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.733327 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:01:21.733334 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:01:21.733341 | orchestrator |
2025-09-27 22:01:21.733349 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-27 22:01:21.733356 | orchestrator | Saturday 27 September 2025 22:00:27 +0000 (0:00:02.090) 0:00:12.376 ****
2025-09-27 22:01:21.733363 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.733395 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:01:21.733403 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:01:21.733410 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:01:21.733417 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:01:21.733424 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:01:21.733431 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:01:21.733438 | orchestrator |
2025-09-27 22:01:21.733445 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-27 22:01:21.733452 | orchestrator | Saturday 27 September 2025 22:00:37 +0000 (0:00:10.205) 0:00:22.581 ****
2025-09-27 22:01:21.733459 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:01:21.733466 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:01:21.733473 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:01:21.733480 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:01:21.733487 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:01:21.733494 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:01:21.733501 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.733508 | orchestrator |
2025-09-27 22:01:21.733515 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-27 22:01:21.733523 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:23.294) 0:00:45.875 ****
2025-09-27 22:01:21.733531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:01:21.733540 | orchestrator |
2025-09-27 22:01:21.733547 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-27 22:01:21.733554 | orchestrator | Saturday 27 September 2025 22:01:01 +0000 (0:00:01.229) 0:00:47.105 ****
2025-09-27 22:01:21.733561 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-27 22:01:21.733573 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-27 22:01:21.733581 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-27 22:01:21.733588 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-27 22:01:21.733595 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-27 22:01:21.733602 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-27 22:01:21.733609 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-27 22:01:21.733616 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-27 22:01:21.733623 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-27 22:01:21.733630 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-27 22:01:21.733637 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-27 22:01:21.733644 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-27 22:01:21.733651 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-27 22:01:21.733658 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-27 22:01:21.733665 | orchestrator |
2025-09-27 22:01:21.733672 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-27 22:01:21.733680 | orchestrator | Saturday 27 September 2025 22:01:07 +0000 (0:00:05.770) 0:00:52.875 ****
2025-09-27 22:01:21.733687 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.733694 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:01:21.733701 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:01:21.733708 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:01:21.733715 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:01:21.733723 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:01:21.733730 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:01:21.733737 | orchestrator |
2025-09-27 22:01:21.733744 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-27 22:01:21.733751 | orchestrator | Saturday 27 September 2025 22:01:08 +0000 (0:00:00.998) 0:00:53.874 ****
2025-09-27 22:01:21.733765 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:01:21.733772 | orchestrator | changed: [testbed-manager]
2025-09-27 22:01:21.733779 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:01:21.733786 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:01:21.733794 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:01:21.733801 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:01:21.733808 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:01:21.733815 | orchestrator |
2025-09-27 22:01:21.733822 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-09-27 22:01:21.733834 | orchestrator | Saturday 27 September 2025 22:01:10 +0000 (0:00:01.587) 0:00:55.461 ****
2025-09-27 22:01:21.733842 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.733849 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:01:21.733856 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:01:21.733863 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:01:21.733870 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:01:21.733877 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:01:21.733884 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:01:21.733891 | orchestrator |
2025-09-27 22:01:21.733899 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-09-27 22:01:21.733906 | orchestrator | Saturday 27 September 2025 22:01:11 +0000 (0:00:01.279) 0:00:56.741 ****
2025-09-27 22:01:21.733913 | orchestrator | ok: [testbed-manager]
2025-09-27 22:01:21.733920 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:01:21.733927 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:01:21.733934 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:01:21.733941 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:01:21.733948 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:01:21.733955 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:01:21.733962 | orchestrator |
2025-09-27 22:01:21.733969 | orchestrator | TASK [osism.services.netdata : Include host type specific
tasks] *************** 2025-09-27 22:01:21.733976 | orchestrator | Saturday 27 September 2025 22:01:13 +0000 (0:00:01.894) 0:00:58.635 **** 2025-09-27 22:01:21.733984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-27 22:01:21.733992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:01:21.734000 | orchestrator | 2025-09-27 22:01:21.734007 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-27 22:01:21.734068 | orchestrator | Saturday 27 September 2025 22:01:14 +0000 (0:00:01.298) 0:00:59.934 **** 2025-09-27 22:01:21.734078 | orchestrator | changed: [testbed-manager] 2025-09-27 22:01:21.734085 | orchestrator | 2025-09-27 22:01:21.734092 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-27 22:01:21.734099 | orchestrator | Saturday 27 September 2025 22:01:16 +0000 (0:00:01.474) 0:01:01.408 **** 2025-09-27 22:01:21.734106 | orchestrator | changed: [testbed-manager] 2025-09-27 22:01:21.734114 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:01:21.734121 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:01:21.734128 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:01:21.734135 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:01:21.734142 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:01:21.734149 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:01:21.734156 | orchestrator | 2025-09-27 22:01:21.734163 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:01:21.734171 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2025-09-27 22:01:21.734178 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:01:21.734185 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:01:21.734262 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:01:21.734270 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:01:21.734278 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:01:21.734285 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:01:21.734293 | orchestrator |
2025-09-27 22:01:21.734300 | orchestrator |
2025-09-27 22:01:21.734307 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:01:21.734314 | orchestrator | Saturday 27 September 2025 22:01:20 +0000 (0:00:04.125) 0:01:05.534 ****
2025-09-27 22:01:21.734322 | orchestrator | ===============================================================================
2025-09-27 22:01:21.734329 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 23.29s
2025-09-27 22:01:21.734336 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.21s
2025-09-27 22:01:21.734344 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.77s
2025-09-27 22:01:21.734351 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 4.13s
2025-09-27 22:01:21.734358 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.32s
2025-09-27 22:01:21.734365 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.43s
2025-09-27 22:01:21.734372 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.09s
2025-09-27 22:01:21.734380 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.09s
2025-09-27 22:01:21.734387 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.89s
2025-09-27 22:01:21.734394 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.59s
2025-09-27 22:01:21.734401 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.47s
2025-09-27 22:01:21.734414 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.46s
2025-09-27 22:01:21.734421 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.30s
2025-09-27 22:01:21.734428 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.28s
2025-09-27 22:01:21.734435 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.23s
2025-09-27 22:01:21.734443 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.00s
2025-09-27 22:01:21.735531 | orchestrator | 2025-09-27 22:01:21 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:21.741071 | orchestrator | 2025-09-27 22:01:21 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:21.741117 | orchestrator | 2025-09-27 22:01:21 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:24.782414 | orchestrator | 2025-09-27 22:01:24 | INFO  | Task dc3d34db-4796-4038-b5c3-fa5919989636 is in state SUCCESS
2025-09-27 22:01:24.783941 | orchestrator | 2025-09-27 22:01:24 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:24.784972 | orchestrator | 2025-09-27 22:01:24 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:24.786688 | orchestrator | 2025-09-27 22:01:24 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:24.786757 | orchestrator | 2025-09-27 22:01:24 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:27.815558 | orchestrator | 2025-09-27 22:01:27 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:27.816780 | orchestrator | 2025-09-27 22:01:27 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:27.817726 | orchestrator | 2025-09-27 22:01:27 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:27.817779 | orchestrator | 2025-09-27 22:01:27 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:30.866132 | orchestrator | 2025-09-27 22:01:30 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:30.867751 | orchestrator | 2025-09-27 22:01:30 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:30.869360 | orchestrator | 2025-09-27 22:01:30 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:30.869548 | orchestrator | 2025-09-27 22:01:30 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:33.910635 | orchestrator | 2025-09-27 22:01:33 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:33.911430 | orchestrator | 2025-09-27 22:01:33 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:33.913017 | orchestrator | 2025-09-27 22:01:33 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:33.913590 | orchestrator | 2025-09-27 22:01:33 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:36.950545 | orchestrator | 2025-09-27 22:01:36 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:36.951670 | orchestrator | 2025-09-27 22:01:36 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:36.953744 | orchestrator | 2025-09-27 22:01:36 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:36.953785 | orchestrator | 2025-09-27 22:01:36 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:40.006851 | orchestrator | 2025-09-27 22:01:40 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:40.007118 | orchestrator | 2025-09-27 22:01:40 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:40.009458 | orchestrator | 2025-09-27 22:01:40 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:40.009505 | orchestrator | 2025-09-27 22:01:40 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:43.043825 | orchestrator | 2025-09-27 22:01:43 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:43.045321 | orchestrator | 2025-09-27 22:01:43 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:43.046786 | orchestrator | 2025-09-27 22:01:43 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:43.047026 | orchestrator | 2025-09-27 22:01:43 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:46.074503 | orchestrator | 2025-09-27 22:01:46 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:46.076412 | orchestrator | 2025-09-27 22:01:46 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:46.078402 | orchestrator | 2025-09-27 22:01:46 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:46.078501 | orchestrator | 2025-09-27 22:01:46 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:49.105912 | orchestrator | 2025-09-27 22:01:49 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:49.106437 | orchestrator | 2025-09-27 22:01:49 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:49.107303 | orchestrator | 2025-09-27 22:01:49 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:49.107403 | orchestrator | 2025-09-27 22:01:49 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:52.144206 | orchestrator | 2025-09-27 22:01:52 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:52.145441 | orchestrator | 2025-09-27 22:01:52 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:52.146406 | orchestrator | 2025-09-27 22:01:52 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:52.146949 | orchestrator | 2025-09-27 22:01:52 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:55.190801 | orchestrator | 2025-09-27 22:01:55 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:55.191816 | orchestrator | 2025-09-27 22:01:55 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:55.192404 | orchestrator | 2025-09-27 22:01:55 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:55.192430 | orchestrator | 2025-09-27 22:01:55 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:01:58.231060 | orchestrator | 2025-09-27 22:01:58 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:01:58.231896 | orchestrator | 2025-09-27 22:01:58 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:01:58.233326 | orchestrator | 2025-09-27 22:01:58 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:01:58.233365 | orchestrator | 2025-09-27 22:01:58 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:01.270770 | orchestrator | 2025-09-27 22:02:01 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:01.273333 | orchestrator | 2025-09-27 22:02:01 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:02:01.274792 | orchestrator | 2025-09-27 22:02:01 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:01.274824 | orchestrator | 2025-09-27 22:02:01 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:04.321366 | orchestrator | 2025-09-27 22:02:04 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:04.323306 | orchestrator | 2025-09-27 22:02:04 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:02:04.326085 | orchestrator | 2025-09-27 22:02:04 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:04.326124 | orchestrator | 2025-09-27 22:02:04 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:07.364513 | orchestrator | 2025-09-27 22:02:07 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:07.364758 | orchestrator | 2025-09-27 22:02:07 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:02:07.366324 | orchestrator | 2025-09-27 22:02:07 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:07.366451 | orchestrator | 2025-09-27 22:02:07 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:10.398400 | orchestrator | 2025-09-27 22:02:10 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:10.399342 | orchestrator | 2025-09-27 22:02:10 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:02:10.400251 | orchestrator | 2025-09-27 22:02:10 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:10.400310 | orchestrator | 2025-09-27 22:02:10 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:13.437082 | orchestrator | 2025-09-27 22:02:13 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:13.438742 | orchestrator | 2025-09-27 22:02:13 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:02:13.439748 | orchestrator | 2025-09-27 22:02:13 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:13.439785 | orchestrator | 2025-09-27 22:02:13 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:16.475190 | orchestrator | 2025-09-27 22:02:16 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:16.475789 | orchestrator | 2025-09-27 22:02:16 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state STARTED
2025-09-27 22:02:16.477580 | orchestrator | 2025-09-27 22:02:16 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:16.477610 | orchestrator | 2025-09-27 22:02:16 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:19.522113 | orchestrator | 2025-09-27 22:02:19 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED
2025-09-27 22:02:19.522222 | orchestrator | 2025-09-27 22:02:19 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:02:19.524322 | orchestrator | 2025-09-27 22:02:19 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:19.527317 | orchestrator | 2025-09-27 22:02:19 | INFO  | Task 91c76508-a76a-4383-b0ef-212056d4a3a9 is in state SUCCESS
2025-09-27 22:02:19.530095 | orchestrator |
2025-09-27 22:02:19.530151 | orchestrator |
2025-09-27 22:02:19.530164 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-27 22:02:19.530176 | orchestrator |
2025-09-27 22:02:19.530187 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-09-27 22:02:19.530199 | orchestrator | Saturday 27 September 2025 22:00:31 +0000 (0:00:00.212) 0:00:00.212 ****
2025-09-27 22:02:19.530211 | orchestrator | ok: [testbed-manager]
2025-09-27 22:02:19.530223 | orchestrator |
2025-09-27 22:02:19.530234 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-27 22:02:19.530267 | orchestrator | Saturday 27 September 2025 22:00:32 +0000 (0:00:00.864) 0:00:01.077 ****
2025-09-27 22:02:19.530279 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-09-27 22:02:19.530290 | orchestrator |
2025-09-27 22:02:19.530302 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-09-27 22:02:19.530313 | orchestrator | Saturday 27 September 2025 22:00:33 +0000 (0:00:01.100) 0:00:01.675 ****
2025-09-27 22:02:19.530324 | orchestrator | changed: [testbed-manager]
2025-09-27 22:02:19.530335 | orchestrator |
2025-09-27 22:02:19.530346 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-09-27 22:02:19.530357 | orchestrator | Saturday 27 September 2025 22:00:34 +0000 (0:00:01.100) 0:00:02.775 ****
2025-09-27 22:02:19.530368 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-09-27 22:02:19.530379 | orchestrator | ok: [testbed-manager]
2025-09-27 22:02:19.530390 | orchestrator |
2025-09-27 22:02:19.530401 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-27 22:02:19.530412 | orchestrator | Saturday 27 September 2025 22:01:17 +0000 (0:00:42.995) 0:00:45.770 ****
2025-09-27 22:02:19.530423 | orchestrator | changed: [testbed-manager]
2025-09-27 22:02:19.530433 | orchestrator |
2025-09-27 22:02:19.530467 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:02:19.530478 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:02:19.530491 | orchestrator |
2025-09-27 22:02:19.530501 | orchestrator |
2025-09-27 22:02:19.530512 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:02:19.530523 | orchestrator | Saturday 27 September 2025 22:01:21 +0000 (0:00:04.055) 0:00:49.825 ****
2025-09-27 22:02:19.530533 | orchestrator | ===============================================================================
2025-09-27 22:02:19.530544 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 43.00s
2025-09-27 22:02:19.530555 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.06s
2025-09-27 22:02:19.530566 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.10s
2025-09-27 22:02:19.530576 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.86s
2025-09-27 22:02:19.530587 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.60s
2025-09-27 22:02:19.530598 | orchestrator |
2025-09-27 22:02:19.530609 | orchestrator |
2025-09-27 22:02:19.530620 | orchestrator | PLAY [Apply role common] *******************************************************
2025-09-27 22:02:19.530632 | orchestrator |
2025-09-27 22:02:19.530643 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-27 22:02:19.530655 | orchestrator | Saturday 27 September 2025 22:00:07 +0000 (0:00:00.214) 0:00:00.214 ****
2025-09-27 22:02:19.530667 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:02:19.530682 | orchestrator |
2025-09-27 22:02:19.530695 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-27 22:02:19.530707 | orchestrator | Saturday 27 September 2025 22:00:08 +0000 (0:00:01.024) 0:00:01.239 ****
2025-09-27 22:02:19.530718 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 22:02:19.530731 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 22:02:19.530742 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 22:02:19.530755 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 22:02:19.530767 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-27 22:02:19.530778 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 22:02:19.530789 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-27 22:02:19.530800 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-27 22:02:19.530811 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-27 22:02:19.530823 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-27 22:02:19.530835 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 22:02:19.530846 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 22:02:19.530858 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-27 22:02:19.530869 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-27 22:02:19.530881 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-27 22:02:19.530892 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-27 22:02:19.530917 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-27 22:02:19.530936 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-27 22:02:19.530945 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-27 22:02:19.530955 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-27 22:02:19.530966 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-27 22:02:19.530976 | orchestrator |
2025-09-27 22:02:19.530986 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-27 22:02:19.530997 | orchestrator | Saturday 27 September 2025 22:00:11 +0000 (0:00:03.870) 0:00:05.110 ****
2025-09-27 22:02:19.531021 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:02:19.531033 | orchestrator |
2025-09-27 22:02:19.531043 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-09-27 22:02:19.531054 | orchestrator | Saturday 27 September 2025 22:00:13 +0000 (0:00:01.237) 0:00:06.348 ****
2025-09-27 22:02:19.531073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-27 22:02:19.531089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-27 22:02:19.531101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-27 22:02:19.531113 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-27 22:02:19.531124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-27 22:02:19.531136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-27 22:02:19.531165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:02:19.531181 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-27 22:02:19.531193 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:02:19.531205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:02:19.531216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:02:19.531227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:02:19.531272 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:02:19.531311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:02:19.531324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:02:19.531341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:02:19.531354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.531366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.531378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.531391 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.531403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.531421 | orchestrator | 2025-09-27 22:02:19.531433 | orchestrator | TASK [service-cert-copy : common | Copying 
over backend internal TLS certificate] *** 2025-09-27 22:02:19.531445 | orchestrator | Saturday 27 September 2025 22:00:17 +0000 (0:00:04.586) 0:00:10.934 **** 2025-09-27 22:02:19.531463 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.531476 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531491 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.531516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531542 | orchestrator | skipping: [testbed-manager] 2025-09-27 22:02:19.531553 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:02:19.531565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.531583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531619 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:02:19.531630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.531645 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531667 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:02:19.531679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.531692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531721 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:02:19.531732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.531750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531820 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:02:19.531832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.531844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531894 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:02:19.531905 | orchestrator | 2025-09-27 22:02:19.531916 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-27 22:02:19.531928 | orchestrator | Saturday 27 September 2025 22:00:19 +0000 (0:00:01.281) 0:00:12.215 **** 2025-09-27 22:02:19.531966 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.531977 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.531995 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.532017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532040 | orchestrator | skipping: 
[testbed-manager] 2025-09-27 22:02:19.532050 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:02:19.532061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.532079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.532106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.532160 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532188 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:02:19.532199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.532210 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:02:19.532220 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:02:19.532232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532318 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:02:19.532329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 22:02:19.532345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.532375 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:02:19.532386 | orchestrator | 2025-09-27 22:02:19.532397 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-27 22:02:19.532407 | orchestrator | Saturday 27 September 2025 22:00:21 +0000 (0:00:02.952) 0:00:15.168 **** 2025-09-27 22:02:19.532419 | orchestrator | skipping: [testbed-manager] 2025-09-27 22:02:19.532430 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:02:19.532441 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:02:19.532452 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:02:19.532464 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:02:19.532475 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:02:19.532484 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:02:19.532492 | orchestrator | 2025-09-27 22:02:19.532502 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-27 22:02:19.532512 | orchestrator | Saturday 27 September 2025 22:00:23 +0000 (0:00:01.576) 0:00:16.745 **** 2025-09-27 22:02:19.532522 | orchestrator | skipping: [testbed-manager] 2025-09-27 22:02:19.532532 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:02:19.532542 | 
orchestrator | skipping: [testbed-node-1] 2025-09-27 22:02:19.532552 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:02:19.532561 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:02:19.532571 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:02:19.532581 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:02:19.532590 | orchestrator | 2025-09-27 22:02:19.532600 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-27 22:02:19.532611 | orchestrator | Saturday 27 September 2025 22:00:25 +0000 (0:00:01.680) 0:00:18.426 **** 2025-09-27 22:02:19.532621 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.532631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.532647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.532657 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.532697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.532706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532716 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532742 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.532769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 
22:02:19.532780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532855 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.532878 | orchestrator | 2025-09-27 22:02:19.532888 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-27 22:02:19.532898 | orchestrator | Saturday 27 September 2025 22:00:32 +0000 (0:00:06.954) 0:00:25.380 **** 2025-09-27 22:02:19.532906 | orchestrator | [WARNING]: Skipped 2025-09-27 22:02:19.532917 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-27 22:02:19.532928 | orchestrator | to this access issue: 2025-09-27 22:02:19.532937 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-27 22:02:19.532946 | orchestrator | directory 2025-09-27 22:02:19.532954 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 22:02:19.532964 | orchestrator | 2025-09-27 22:02:19.532974 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-27 22:02:19.532984 | orchestrator | Saturday 27 September 2025 22:00:33 +0000 (0:00:01.087) 0:00:26.468 **** 2025-09-27 22:02:19.532994 | orchestrator | [WARNING]: Skipped 2025-09-27 22:02:19.533004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-27 22:02:19.533013 | orchestrator | to this access issue: 2025-09-27 22:02:19.533022 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-27 22:02:19.533030 | orchestrator | directory 2025-09-27 22:02:19.533041 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 22:02:19.533049 | orchestrator | 2025-09-27 22:02:19.533058 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 
2025-09-27 22:02:19.533067 | orchestrator | Saturday 27 September 2025 22:00:34 +0000 (0:00:01.237) 0:00:27.705 **** 2025-09-27 22:02:19.533078 | orchestrator | [WARNING]: Skipped 2025-09-27 22:02:19.533088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-27 22:02:19.533098 | orchestrator | to this access issue: 2025-09-27 22:02:19.533108 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-27 22:02:19.533117 | orchestrator | directory 2025-09-27 22:02:19.533126 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 22:02:19.533136 | orchestrator | 2025-09-27 22:02:19.533145 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-27 22:02:19.533154 | orchestrator | Saturday 27 September 2025 22:00:35 +0000 (0:00:00.797) 0:00:28.503 **** 2025-09-27 22:02:19.533163 | orchestrator | [WARNING]: Skipped 2025-09-27 22:02:19.533172 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-27 22:02:19.533181 | orchestrator | to this access issue: 2025-09-27 22:02:19.533190 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-27 22:02:19.533199 | orchestrator | directory 2025-09-27 22:02:19.533209 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 22:02:19.533219 | orchestrator | 2025-09-27 22:02:19.533287 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-27 22:02:19.533305 | orchestrator | Saturday 27 September 2025 22:00:36 +0000 (0:00:00.815) 0:00:29.318 **** 2025-09-27 22:02:19.533316 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:02:19.533326 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:02:19.533335 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:02:19.533345 | orchestrator | changed: [testbed-node-0] 2025-09-27 
22:02:19.533374 | orchestrator | changed: [testbed-manager] 2025-09-27 22:02:19.533384 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:02:19.533393 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:02:19.533402 | orchestrator | 2025-09-27 22:02:19.533411 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-27 22:02:19.533421 | orchestrator | Saturday 27 September 2025 22:00:39 +0000 (0:00:03.810) 0:00:33.129 **** 2025-09-27 22:02:19.533430 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 22:02:19.533440 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 22:02:19.533450 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 22:02:19.533468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 22:02:19.533478 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 22:02:19.533487 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 22:02:19.533497 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 22:02:19.533506 | orchestrator | 2025-09-27 22:02:19.533515 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-27 22:02:19.533524 | orchestrator | Saturday 27 September 2025 22:00:42 +0000 (0:00:03.064) 0:00:36.194 **** 2025-09-27 22:02:19.533533 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:02:19.533542 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:02:19.533551 | orchestrator | changed: [testbed-manager] 2025-09-27 
22:02:19.533559 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:02:19.533568 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:02:19.533577 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:02:19.533586 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:02:19.533595 | orchestrator | 2025-09-27 22:02:19.533604 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-27 22:02:19.533613 | orchestrator | Saturday 27 September 2025 22:00:45 +0000 (0:00:02.904) 0:00:39.099 **** 2025-09-27 22:02:19.533630 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.533641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.533651 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.533671 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.533681 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.533700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.533711 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.533727 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.533737 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.533747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.533763 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.533772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.533782 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.533798 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.533809 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.533819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.533830 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.533845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:02:19.534165 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534187 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534197 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534205 | orchestrator | 2025-09-27 22:02:19.534214 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-27 22:02:19.534223 | orchestrator | Saturday 27 September 2025 22:00:49 +0000 (0:00:03.336) 0:00:42.436 **** 2025-09-27 22:02:19.534231 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 22:02:19.534260 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 22:02:19.534269 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 22:02:19.534276 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 22:02:19.534284 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 22:02:19.534291 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 22:02:19.534299 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 22:02:19.534307 | orchestrator | 2025-09-27 22:02:19.534318 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-27 22:02:19.534335 | orchestrator | Saturday 27 September 2025 22:00:51 +0000 (0:00:02.723) 0:00:45.159 **** 2025-09-27 22:02:19.534345 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 22:02:19.534354 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 22:02:19.534363 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 22:02:19.534372 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 22:02:19.534380 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 22:02:19.534392 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 22:02:19.534410 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 22:02:19.534418 | orchestrator | 2025-09-27 22:02:19.534426 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-27 22:02:19.534434 | orchestrator | Saturday 27 September 2025 22:00:54 +0000 (0:00:02.238) 0:00:47.398 **** 2025-09-27 22:02:19.534443 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.534452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.534477 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.534487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.534495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.534503 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.534573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 22:02:19.534587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534612 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534680 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534739 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:02:19.534750 | orchestrator | 2025-09-27 22:02:19.534762 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-27 22:02:19.534774 | orchestrator | Saturday 27 September 2025 22:00:57 +0000 (0:00:03.373) 0:00:50.771 **** 2025-09-27 22:02:19.534785 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:02:19.534797 | orchestrator | changed: [testbed-manager] 2025-09-27 22:02:19.534808 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:02:19.534827 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:02:19.534839 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:02:19.534851 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:02:19.534863 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:02:19.534879 | orchestrator | 2025-09-27 22:02:19.534891 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-27 22:02:19.534904 | orchestrator | Saturday 27 September 2025 22:00:59 +0000 (0:00:01.441) 0:00:52.213 **** 2025-09-27 22:02:19.534918 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:02:19.534930 | orchestrator | changed: [testbed-manager] 2025-09-27 22:02:19.534942 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:02:19.534953 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:02:19.534964 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:02:19.534977 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:02:19.534989 | orchestrator | changed: [testbed-node-5] 2025-09-27 
22:02:19.535000 | orchestrator | 2025-09-27 22:02:19.535011 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 22:02:19.535024 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:01.169) 0:00:53.382 **** 2025-09-27 22:02:19.535036 | orchestrator | 2025-09-27 22:02:19.535048 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 22:02:19.535067 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:00.061) 0:00:53.444 **** 2025-09-27 22:02:19.535081 | orchestrator | 2025-09-27 22:02:19.535090 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 22:02:19.535098 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:00.059) 0:00:53.503 **** 2025-09-27 22:02:19.535106 | orchestrator | 2025-09-27 22:02:19.535115 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 22:02:19.535123 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:00.059) 0:00:53.562 **** 2025-09-27 22:02:19.535131 | orchestrator | 2025-09-27 22:02:19.535139 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 22:02:19.535148 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:00.230) 0:00:53.792 **** 2025-09-27 22:02:19.535157 | orchestrator | 2025-09-27 22:02:19.535167 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 22:02:19.535177 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:00.061) 0:00:53.853 **** 2025-09-27 22:02:19.535188 | orchestrator | 2025-09-27 22:02:19.535199 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 22:02:19.535211 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:00.061) 0:00:53.915 **** 
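The repeated "Flush handlers" tasks above, followed by the "RUNNING HANDLER [common : Restart … container]" entries below, are Ansible's deferred-handler mechanism at work: change tasks queue a container restart via `notify`, and `meta: flush_handlers` forces the queued handlers to run at that point in the play rather than at play end. A minimal hedged sketch of that pattern (module invocation, task names, and variables are illustrative assumptions, not the actual kolla-ansible role source):

```yaml
# Hedged sketch, NOT the real kolla-ansible "common" role: it only
# illustrates the notify + flush_handlers flow seen in the log above.
- hosts: all
  tasks:
    - name: Check common containers
      kolla_container:                   # assumed module/arguments for illustration
        action: compare_container
        name: "{{ item.value.container_name }}"
      loop: "{{ common_services | dict2items }}"
      notify: Restart fluentd container  # queues the handler when the task changes

    - name: Flush handlers
      meta: flush_handlers               # run all queued handlers immediately

  handlers:
    - name: Restart fluentd container
      kolla_container:
        action: recreate_or_restart_container
        name: fluentd
```

Because `flush_handlers` is a `meta` task, it reports no per-host status, which is why the "Flush handlers" entries in the log show a timing line but no `ok`/`changed` results.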
2025-09-27 22:02:19.535222 | orchestrator | 2025-09-27 22:02:19.535234 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-27 22:02:19.535271 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:00.081) 0:00:53.997 **** 2025-09-27 22:02:19.535284 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:02:19.535295 | orchestrator | changed: [testbed-manager] 2025-09-27 22:02:19.535306 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:02:19.535316 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:02:19.535327 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:02:19.535338 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:02:19.535349 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:02:19.535359 | orchestrator | 2025-09-27 22:02:19.535370 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-27 22:02:19.535382 | orchestrator | Saturday 27 September 2025 22:01:38 +0000 (0:00:37.260) 0:01:31.257 **** 2025-09-27 22:02:19.535393 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:02:19.535415 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:02:19.535424 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:02:19.535431 | orchestrator | changed: [testbed-manager] 2025-09-27 22:02:19.535438 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:02:19.535445 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:02:19.535463 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:02:19.535470 | orchestrator | 2025-09-27 22:02:19.535478 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-27 22:02:19.535485 | orchestrator | Saturday 27 September 2025 22:02:06 +0000 (0:00:28.850) 0:02:00.107 **** 2025-09-27 22:02:19.535492 | orchestrator | ok: [testbed-manager] 2025-09-27 22:02:19.535502 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:02:19.535511 | 
orchestrator | ok: [testbed-node-1] 2025-09-27 22:02:19.535520 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:02:19.535529 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:02:19.535536 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:02:19.535544 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:02:19.535552 | orchestrator | 2025-09-27 22:02:19.535560 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-27 22:02:19.535567 | orchestrator | Saturday 27 September 2025 22:02:09 +0000 (0:00:02.150) 0:02:02.257 **** 2025-09-27 22:02:19.535575 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:02:19.535582 | orchestrator | changed: [testbed-manager] 2025-09-27 22:02:19.535590 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:02:19.535597 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:02:19.535605 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:02:19.535612 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:02:19.535620 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:02:19.535628 | orchestrator | 2025-09-27 22:02:19.535636 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:02:19.535645 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 22:02:19.535653 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 22:02:19.535659 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 22:02:19.535664 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 22:02:19.535669 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 22:02:19.535674 | orchestrator | testbed-node-4 : ok=18  changed=14  
unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 22:02:19.535678 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 22:02:19.535683 | orchestrator | 2025-09-27 22:02:19.535688 | orchestrator | 2025-09-27 22:02:19.535693 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:02:19.535698 | orchestrator | Saturday 27 September 2025 22:02:17 +0000 (0:00:08.775) 0:02:11.033 **** 2025-09-27 22:02:19.535703 | orchestrator | =============================================================================== 2025-09-27 22:02:19.535708 | orchestrator | common : Restart fluentd container ------------------------------------- 37.26s 2025-09-27 22:02:19.535718 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.85s 2025-09-27 22:02:19.535724 | orchestrator | common : Restart cron container ----------------------------------------- 8.78s 2025-09-27 22:02:19.535728 | orchestrator | common : Copying over config.json files for services -------------------- 6.95s 2025-09-27 22:02:19.535733 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.59s 2025-09-27 22:02:19.535738 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.87s 2025-09-27 22:02:19.535743 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.81s 2025-09-27 22:02:19.535757 | orchestrator | common : Check common containers ---------------------------------------- 3.37s 2025-09-27 22:02:19.535762 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.34s 2025-09-27 22:02:19.535767 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.06s 2025-09-27 22:02:19.535772 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key 
------ 2.95s 2025-09-27 22:02:19.535776 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.91s 2025-09-27 22:02:19.535781 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.72s 2025-09-27 22:02:19.535786 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.24s 2025-09-27 22:02:19.535791 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.15s 2025-09-27 22:02:19.535796 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.68s 2025-09-27 22:02:19.535801 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.58s 2025-09-27 22:02:19.535805 | orchestrator | common : Creating log volume -------------------------------------------- 1.44s 2025-09-27 22:02:19.535810 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.28s 2025-09-27 22:02:19.535815 | orchestrator | common : include_tasks -------------------------------------------------- 1.24s 2025-09-27 22:02:19.535826 | orchestrator | 2025-09-27 22:02:19 | INFO  | Task 73d2172e-cfbb-4092-8629-22e2b7639456 is in state STARTED 2025-09-27 22:02:19.535831 | orchestrator | 2025-09-27 22:02:19 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:19.536459 | orchestrator | 2025-09-27 22:02:19 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:19.536482 | orchestrator | 2025-09-27 22:02:19 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:22.566405 | orchestrator | 2025-09-27 22:02:22 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED 2025-09-27 22:02:22.566537 | orchestrator | 2025-09-27 22:02:22 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED 2025-09-27 22:02:22.566883 | orchestrator | 2025-09-27 22:02:22 | INFO  | 
Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:02:22.567491 | orchestrator | 2025-09-27 22:02:22 | INFO  | Task 73d2172e-cfbb-4092-8629-22e2b7639456 is in state STARTED 2025-09-27 22:02:22.568699 | orchestrator | 2025-09-27 22:02:22 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:22.569648 | orchestrator | 2025-09-27 22:02:22 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:22.569674 | orchestrator | 2025-09-27 22:02:22 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:25.612649 | orchestrator | 2025-09-27 22:02:25 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED 2025-09-27 22:02:25.612734 | orchestrator | 2025-09-27 22:02:25 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED 2025-09-27 22:02:25.613227 | orchestrator | 2025-09-27 22:02:25 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:02:25.613760 | orchestrator | 2025-09-27 22:02:25 | INFO  | Task 73d2172e-cfbb-4092-8629-22e2b7639456 is in state STARTED 2025-09-27 22:02:25.614343 | orchestrator | 2025-09-27 22:02:25 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:25.616821 | orchestrator | 2025-09-27 22:02:25 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:25.616872 | orchestrator | 2025-09-27 22:02:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:28.682102 | orchestrator | 2025-09-27 22:02:28 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED 2025-09-27 22:02:28.682302 | orchestrator | 2025-09-27 22:02:28 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED 2025-09-27 22:02:28.682324 | orchestrator | 2025-09-27 22:02:28 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:02:28.682336 | orchestrator | 2025-09-27 22:02:28 | INFO  | Task 
73d2172e-cfbb-4092-8629-22e2b7639456 is in state STARTED 2025-09-27 22:02:28.682363 | orchestrator | 2025-09-27 22:02:28 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:28.682375 | orchestrator | 2025-09-27 22:02:28 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:28.682386 | orchestrator | 2025-09-27 22:02:28 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:31.680556 | orchestrator | 2025-09-27 22:02:31 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED 2025-09-27 22:02:31.680661 | orchestrator | 2025-09-27 22:02:31 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED 2025-09-27 22:02:31.681380 | orchestrator | 2025-09-27 22:02:31 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:02:31.681796 | orchestrator | 2025-09-27 22:02:31 | INFO  | Task 73d2172e-cfbb-4092-8629-22e2b7639456 is in state STARTED 2025-09-27 22:02:31.682404 | orchestrator | 2025-09-27 22:02:31 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:31.682862 | orchestrator | 2025-09-27 22:02:31 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:31.682894 | orchestrator | 2025-09-27 22:02:31 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:34.714965 | orchestrator | 2025-09-27 22:02:34 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED 2025-09-27 22:02:34.715083 | orchestrator | 2025-09-27 22:02:34 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED 2025-09-27 22:02:34.715630 | orchestrator | 2025-09-27 22:02:34 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:02:34.716208 | orchestrator | 2025-09-27 22:02:34 | INFO  | Task 73d2172e-cfbb-4092-8629-22e2b7639456 is in state STARTED 2025-09-27 22:02:34.716939 | orchestrator | 2025-09-27 22:02:34 | INFO  | Task 
157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:34.717460 | orchestrator | 2025-09-27 22:02:34 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:34.717555 | orchestrator | 2025-09-27 22:02:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:37.752766 | orchestrator | 2025-09-27 22:02:37 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED 2025-09-27 22:02:37.754110 | orchestrator | 2025-09-27 22:02:37 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED 2025-09-27 22:02:37.754603 | orchestrator | 2025-09-27 22:02:37 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:02:37.755446 | orchestrator | 2025-09-27 22:02:37 | INFO  | Task 73d2172e-cfbb-4092-8629-22e2b7639456 is in state STARTED 2025-09-27 22:02:37.756052 | orchestrator | 2025-09-27 22:02:37 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:37.758148 | orchestrator | 2025-09-27 22:02:37 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:37.758201 | orchestrator | 2025-09-27 22:02:37 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:40.801721 | orchestrator | 2025-09-27 22:02:40 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED 2025-09-27 22:02:40.801872 | orchestrator | 2025-09-27 22:02:40 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED 2025-09-27 22:02:40.802361 | orchestrator | 2025-09-27 22:02:40 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:02:40.802732 | orchestrator | 2025-09-27 22:02:40 | INFO  | Task 73d2172e-cfbb-4092-8629-22e2b7639456 is in state SUCCESS 2025-09-27 22:02:40.804343 | orchestrator | 2025-09-27 22:02:40 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:02:40.805024 | orchestrator | 2025-09-27 22:02:40 | INFO  | Task 
157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:40.805671 | orchestrator | 2025-09-27 22:02:40 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:40.805694 | orchestrator | 2025-09-27 22:02:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:43.831991 | orchestrator | 2025-09-27 22:02:43 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED 2025-09-27 22:02:43.832098 | orchestrator | 2025-09-27 22:02:43 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED 2025-09-27 22:02:43.832360 | orchestrator | 2025-09-27 22:02:43 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:02:43.832924 | orchestrator | 2025-09-27 22:02:43 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:02:43.835409 | orchestrator | 2025-09-27 22:02:43 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:43.835465 | orchestrator | 2025-09-27 22:02:43 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:43.835486 | orchestrator | 2025-09-27 22:02:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:46.956965 | orchestrator | 2025-09-27 22:02:46 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state STARTED 2025-09-27 22:02:46.957381 | orchestrator | 2025-09-27 22:02:46 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED 2025-09-27 22:02:46.959128 | orchestrator | 2025-09-27 22:02:46 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:02:46.960293 | orchestrator | 2025-09-27 22:02:46 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:02:46.962635 | orchestrator | 2025-09-27 22:02:46 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:02:46.964484 | orchestrator | 2025-09-27 22:02:46 | INFO  | Task 
04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:02:46.964523 | orchestrator | 2025-09-27 22:02:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:02:50.013854 | orchestrator | 2025-09-27 22:02:50 | INFO  | Task ddeffc42-33e7-4b3c-a2d3-9e7c5aa3b768 is in state SUCCESS 2025-09-27 22:02:50.015492 | orchestrator | 2025-09-27 22:02:50.015546 | orchestrator | 2025-09-27 22:02:50.015560 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:02:50.015572 | orchestrator | 2025-09-27 22:02:50.015583 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:02:50.015596 | orchestrator | Saturday 27 September 2025 22:02:23 +0000 (0:00:00.517) 0:00:00.517 **** 2025-09-27 22:02:50.015607 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:02:50.015621 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:02:50.015640 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:02:50.015659 | orchestrator | 2025-09-27 22:02:50.015677 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:02:50.015695 | orchestrator | Saturday 27 September 2025 22:02:24 +0000 (0:00:00.342) 0:00:00.860 **** 2025-09-27 22:02:50.015745 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-27 22:02:50.015764 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-27 22:02:50.015783 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-27 22:02:50.015795 | orchestrator | 2025-09-27 22:02:50.015806 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-27 22:02:50.015817 | orchestrator | 2025-09-27 22:02:50.015828 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-27 22:02:50.015839 | orchestrator | Saturday 27 September 2025 22:02:24 +0000 
(0:00:00.839) 0:00:01.700 ****
2025-09-27 22:02:50.015850 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:02:50.015862 | orchestrator |
2025-09-27 22:02:50.015873 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-27 22:02:50.015883 | orchestrator | Saturday 27 September 2025 22:02:26 +0000 (0:00:01.096) 0:00:02.797 ****
2025-09-27 22:02:50.015894 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-27 22:02:50.015905 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-27 22:02:50.015916 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-27 22:02:50.015926 | orchestrator |
2025-09-27 22:02:50.015938 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-27 22:02:50.015949 | orchestrator | Saturday 27 September 2025 22:02:27 +0000 (0:00:01.020) 0:00:03.817 ****
2025-09-27 22:02:50.015960 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-27 22:02:50.015971 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-27 22:02:50.015982 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-27 22:02:50.015992 | orchestrator |
2025-09-27 22:02:50.016003 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-27 22:02:50.016014 | orchestrator | Saturday 27 September 2025 22:02:29 +0000 (0:00:01.982) 0:00:05.800 ****
2025-09-27 22:02:50.016025 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:02:50.016036 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:02:50.016047 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:02:50.016057 | orchestrator |
2025-09-27 22:02:50.016068 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-27 22:02:50.016079 | orchestrator | Saturday 27 September 2025 22:02:30 +0000 (0:00:01.814) 0:00:07.614 ****
2025-09-27 22:02:50.016097 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:02:50.016113 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:02:50.016141 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:02:50.016160 | orchestrator |
2025-09-27 22:02:50.016177 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:02:50.016195 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:02:50.016214 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:02:50.016242 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:02:50.016288 | orchestrator |
2025-09-27 22:02:50.016308 | orchestrator |
2025-09-27 22:02:50.016325 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:02:50.016343 | orchestrator | Saturday 27 September 2025 22:02:37 +0000 (0:00:06.540) 0:00:14.155 ****
2025-09-27 22:02:50.016361 | orchestrator | ===============================================================================
2025-09-27 22:02:50.016379 | orchestrator | memcached : Restart memcached container --------------------------------- 6.54s
2025-09-27 22:02:50.016398 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.98s
2025-09-27 22:02:50.016415 | orchestrator | memcached : Check memcached container ----------------------------------- 1.81s
2025-09-27 22:02:50.016449 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.10s
2025-09-27 22:02:50.016467 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.02s
2025-09-27 22:02:50.016482 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s
2025-09-27 22:02:50.016493 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2025-09-27 22:02:50.016503 | orchestrator |
2025-09-27 22:02:50.016514 | orchestrator |
2025-09-27 22:02:50.016525 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:02:50.016535 | orchestrator |
2025-09-27 22:02:50.016546 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:02:50.016557 | orchestrator | Saturday 27 September 2025 22:02:24 +0000 (0:00:00.341) 0:00:00.341 ****
2025-09-27 22:02:50.016568 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:02:50.016579 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:02:50.016589 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:02:50.016600 | orchestrator |
2025-09-27 22:02:50.016611 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:02:50.016638 | orchestrator | Saturday 27 September 2025 22:02:25 +0000 (0:00:00.586) 0:00:00.927 ****
2025-09-27 22:02:50.016649 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-27 22:02:50.016660 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-27 22:02:50.016671 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-27 22:02:50.016682 | orchestrator |
2025-09-27 22:02:50.016692 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-27 22:02:50.016703 | orchestrator |
2025-09-27 22:02:50.016714 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-27 22:02:50.016726 | orchestrator | Saturday 27 September 2025 22:02:25 +0000 (0:00:00.695) 0:00:01.622 ****
2025-09-27 22:02:50.016737 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for
testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:02:50.016747 | orchestrator |
2025-09-27 22:02:50.016758 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-27 22:02:50.016769 | orchestrator | Saturday 27 September 2025 22:02:26 +0000 (0:00:00.937) 0:00:02.560 ****
2025-09-27 22:02:50.016784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016877 | orchestrator |
2025-09-27 22:02:50.016888 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-09-27 22:02:50.016899 | orchestrator | Saturday 27 September 2025 22:02:28 +0000 (0:00:01.488) 0:00:04.048 ****
2025-09-27 22:02:50.016911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.016996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017008 | orchestrator |
2025-09-27 22:02:50.017019 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-09-27 22:02:50.017030 | orchestrator | Saturday 27 September 2025 22:02:31 +0000 (0:00:03.138) 0:00:07.186 ****
2025-09-27 22:02:50.017041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017122 | orchestrator |
2025-09-27 22:02:50.017139 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-09-27 22:02:50.017150 | orchestrator | Saturday 27 September 2025 22:02:33 +0000 (0:00:02.405) 0:00:09.592 ****
2025-09-27 22:02:50.017161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-27 22:02:50.017244 | orchestrator |
2025-09-27 22:02:50.017255 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-27 22:02:50.017287 | orchestrator | Saturday 27 September 2025 22:02:35 +0000 (0:00:01.586) 0:00:11.178 ****
2025-09-27 22:02:50.017298 | orchestrator |
2025-09-27 22:02:50.017309 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-27 22:02:50.017326 | orchestrator | Saturday 27 September 2025 22:02:35 +0000 (0:00:00.092) 0:00:11.270 ****
2025-09-27 22:02:50.017337 | orchestrator |
2025-09-27 22:02:50.017349 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-27 22:02:50.017359 | orchestrator | Saturday 27 September 2025 22:02:35 +0000 (0:00:00.067) 0:00:11.337 ****
2025-09-27 22:02:50.017370 | orchestrator |
2025-09-27 22:02:50.017381 | orchestrator | RUNNING HANDLER [redis : Restart redis container]
******************************
2025-09-27 22:02:50.017392 | orchestrator | Saturday 27 September 2025 22:02:35 +0000 (0:00:00.063) 0:00:11.401 ****
2025-09-27 22:02:50.017402 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:02:50.017413 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:02:50.017424 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:02:50.017435 | orchestrator |
2025-09-27 22:02:50.017446 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-27 22:02:50.017457 | orchestrator | Saturday 27 September 2025 22:02:38 +0000 (0:00:03.153) 0:00:14.555 ****
2025-09-27 22:02:50.017468 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:02:50.017479 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:02:50.017489 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:02:50.017500 | orchestrator |
2025-09-27 22:02:50.017511 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:02:50.017530 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:02:50.017541 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:02:50.017552 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:02:50.017563 | orchestrator |
2025-09-27 22:02:50.017574 | orchestrator |
2025-09-27 22:02:50.017584 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:02:50.017595 | orchestrator | Saturday 27 September 2025 22:02:48 +0000 (0:00:09.438) 0:00:23.993 ****
2025-09-27 22:02:50.017606 | orchestrator | ===============================================================================
2025-09-27 22:02:50.017617 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.44s
2025-09-27 22:02:50.017628 | orchestrator | redis : Restart redis container ----------------------------------------- 3.15s
2025-09-27 22:02:50.017639 | orchestrator | redis : Copying over default config.json files -------------------------- 3.14s
2025-09-27 22:02:50.017649 | orchestrator | redis : Copying over redis config files --------------------------------- 2.41s
2025-09-27 22:02:50.017660 | orchestrator | redis : Check redis containers ------------------------------------------ 1.59s
2025-09-27 22:02:50.017671 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.49s
2025-09-27 22:02:50.017682 | orchestrator | redis : include_tasks --------------------------------------------------- 0.94s
2025-09-27 22:02:50.017693 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2025-09-27 22:02:50.017704 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s
2025-09-27 22:02:50.017715 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s
2025-09-27 22:02:50.027029 | orchestrator | 2025-09-27 22:02:50 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:02:50.027432 | orchestrator | 2025-09-27 22:02:50 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:50.034671 | orchestrator | 2025-09-27 22:02:50 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:02:50.034759 | orchestrator | 2025-09-27 22:02:50 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:50.037194 | orchestrator | 2025-09-27 22:02:50 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:02:50.037239 | orchestrator | 2025-09-27 22:02:50 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:53.067784 | orchestrator | 2025-09-27 22:02:53 | INFO  | Task
dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:02:53.068024 | orchestrator | 2025-09-27 22:02:53 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:53.068620 | orchestrator | 2025-09-27 22:02:53 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:02:53.069110 | orchestrator | 2025-09-27 22:02:53 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:53.070191 | orchestrator | 2025-09-27 22:02:53 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:02:53.070360 | orchestrator | 2025-09-27 22:02:53 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:56.151890 | orchestrator | 2025-09-27 22:02:56 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:02:56.152048 | orchestrator | 2025-09-27 22:02:56 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:56.152067 | orchestrator | 2025-09-27 22:02:56 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:02:56.152106 | orchestrator | 2025-09-27 22:02:56 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:56.152117 | orchestrator | 2025-09-27 22:02:56 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:02:56.152129 | orchestrator | 2025-09-27 22:02:56 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:02:59.162863 | orchestrator | 2025-09-27 22:02:59 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:02:59.162955 | orchestrator | 2025-09-27 22:02:59 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:02:59.164411 | orchestrator | 2025-09-27 22:02:59 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:02:59.164856 | orchestrator | 2025-09-27 22:02:59 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:02:59.165766 | orchestrator | 2025-09-27 22:02:59 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:02:59.165796 | orchestrator | 2025-09-27 22:02:59 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:03:02.218512 | orchestrator | 2025-09-27 22:03:02 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:03:02.218620 | orchestrator | 2025-09-27 22:03:02 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:03:02.218635 | orchestrator | 2025-09-27 22:03:02 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:03:02.218647 | orchestrator | 2025-09-27 22:03:02 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:03:02.218658 | orchestrator | 2025-09-27 22:03:02 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:03:02.218670 | orchestrator | 2025-09-27 22:03:02 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:03:05.242726 | orchestrator | 2025-09-27 22:03:05 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:03:05.245910 | orchestrator | 2025-09-27 22:03:05 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:03:05.246395 | orchestrator | 2025-09-27 22:03:05 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:03:05.247108 | orchestrator | 2025-09-27 22:03:05 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:03:05.247803 | orchestrator | 2025-09-27 22:03:05 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:03:05.247842 | orchestrator | 2025-09-27 22:03:05 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:03:08.284822 | orchestrator | 2025-09-27 22:03:08 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:03:08.284904 | orchestrator | 2025-09-27 22:03:08 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:03:08.292657 | orchestrator | 2025-09-27 22:03:08 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:03:08.292702 | orchestrator | 2025-09-27 22:03:08 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:03:08.292707 | orchestrator | 2025-09-27 22:03:08 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:03:08.292712 | orchestrator | 2025-09-27 22:03:08 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:03:11.332932 | orchestrator | 2025-09-27 22:03:11 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:03:11.333066 | orchestrator | 2025-09-27 22:03:11 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:03:11.334508 | orchestrator | 2025-09-27 22:03:11 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:03:11.336523 | orchestrator | 2025-09-27 22:03:11 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:03:11.337741 | orchestrator | 2025-09-27 22:03:11 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:03:11.337795 | orchestrator | 2025-09-27 22:03:11 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:03:14.369034 | orchestrator | 2025-09-27 22:03:14 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:03:14.370173 | orchestrator | 2025-09-27 22:03:14 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:03:14.370562 | orchestrator | 2025-09-27 22:03:14 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:03:14.374580 | orchestrator | 2025-09-27 22:03:14 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:03:14.374688 | orchestrator | 2025-09-27 22:03:14 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:03:14.374706 | orchestrator | 2025-09-27 22:03:14 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:03:17.402793 | orchestrator | 2025-09-27 22:03:17 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:03:17.403025 | orchestrator | 2025-09-27 22:03:17 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:03:17.403791 | orchestrator | 2025-09-27 22:03:17 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:03:17.404117 | orchestrator | 2025-09-27 22:03:17 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:03:17.404817 | orchestrator | 2025-09-27 22:03:17 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:03:17.404856 | orchestrator | 2025-09-27 22:03:17 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:03:20.438238 | orchestrator | 2025-09-27 22:03:20 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state STARTED
2025-09-27 22:03:20.442945 | orchestrator | 2025-09-27 22:03:20 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED
2025-09-27 22:03:20.443040 | orchestrator | 2025-09-27 22:03:20 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED
2025-09-27 22:03:20.443092 | orchestrator | 2025-09-27 22:03:20 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:03:20.444933 | orchestrator | 2025-09-27 22:03:20 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:03:20.444969 | orchestrator | 2025-09-27 22:03:20 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:03:23.474156 | orchestrator |
2025-09-27 22:03:23.474241 | orchestrator |
2025-09-27
22:03:23.474251 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:03:23.474259 | orchestrator |
2025-09-27 22:03:23.474266 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:03:23.474274 | orchestrator | Saturday 27 September 2025 22:02:24 +0000 (0:00:00.536) 0:00:00.536 ****
2025-09-27 22:03:23.474380 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:03:23.474392 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:03:23.474399 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:03:23.474406 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:03:23.474412 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:03:23.474437 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:03:23.474444 | orchestrator |
2025-09-27 22:03:23.474451 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:03:23.474458 | orchestrator | Saturday 27 September 2025 22:02:25 +0000 (0:00:01.215) 0:00:01.752 ****
2025-09-27 22:03:23.474464 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-27 22:03:23.474472 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-27 22:03:23.474478 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-27 22:03:23.474496 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-27 22:03:23.474503 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-27 22:03:23.474509 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-27 22:03:23.474520 | orchestrator |
2025-09-27 22:03:23.474527 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-27 22:03:23.474534 | orchestrator |
2025-09-27 22:03:23.474540 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-27 22:03:23.474547 | orchestrator | Saturday 27 September 2025 22:02:26 +0000 (0:00:01.151) 0:00:02.903 ****
2025-09-27 22:03:23.474554 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:03:23.474562 | orchestrator |
2025-09-27 22:03:23.474569 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-27 22:03:23.474575 | orchestrator | Saturday 27 September 2025 22:02:28 +0000 (0:00:01.726) 0:00:04.630 ****
2025-09-27 22:03:23.474582 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-27 22:03:23.474589 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-27 22:03:23.474595 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-27 22:03:23.474602 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-27 22:03:23.474608 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-27 22:03:23.474615 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-27 22:03:23.474621 | orchestrator |
2025-09-27 22:03:23.474627 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-27 22:03:23.474634 | orchestrator | Saturday 27 September 2025 22:02:29 +0000 (0:00:01.611) 0:00:06.242 ****
2025-09-27 22:03:23.474640 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-27 22:03:23.474647 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-27 22:03:23.474653 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-27 22:03:23.474660 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-27 22:03:23.474666 | orchestrator | changed:
[testbed-node-4] => (item=openvswitch) 2025-09-27 22:03:23.474673 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-27 22:03:23.474679 | orchestrator | 2025-09-27 22:03:23.474686 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-27 22:03:23.474692 | orchestrator | Saturday 27 September 2025 22:02:31 +0000 (0:00:01.500) 0:00:07.742 **** 2025-09-27 22:03:23.474699 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-27 22:03:23.474705 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:23.474713 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-27 22:03:23.474719 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:23.474725 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-27 22:03:23.474732 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:23.474738 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-27 22:03:23.474745 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:23.474752 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-27 22:03:23.474764 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:23.474770 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-27 22:03:23.474777 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:23.474783 | orchestrator | 2025-09-27 22:03:23.474790 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-27 22:03:23.474796 | orchestrator | Saturday 27 September 2025 22:02:32 +0000 (0:00:01.091) 0:00:08.834 **** 2025-09-27 22:03:23.474803 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:23.474809 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:23.474816 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:23.474822 | orchestrator | skipping: [testbed-node-3] 2025-09-27 
22:03:23.474828 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:23.474835 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:23.474841 | orchestrator | 2025-09-27 22:03:23.474848 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-27 22:03:23.474854 | orchestrator | Saturday 27 September 2025 22:02:33 +0000 (0:00:00.730) 0:00:09.565 **** 2025-09-27 22:03:23.474881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474902 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.474994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475001 | orchestrator | 2025-09-27 22:03:23.475008 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-27 22:03:23.475015 | orchestrator | Saturday 27 September 2025 22:02:34 +0000 (0:00:01.455) 0:00:11.020 **** 2025-09-27 22:03:23.475024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475064 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475148 | orchestrator | 2025-09-27 22:03:23.475160 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-27 22:03:23.475167 | orchestrator | Saturday 27 September 2025 22:02:37 +0000 (0:00:02.478) 0:00:13.499 **** 2025-09-27 22:03:23.475175 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:23.475182 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:23.475189 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:23.475196 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:23.475203 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:23.475211 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:23.475218 | orchestrator | 2025-09-27 22:03:23.475226 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-27 22:03:23.475237 | orchestrator | Saturday 27 September 2025 22:02:38 +0000 (0:00:01.422) 0:00:14.921 **** 2025-09-27 22:03:23.475245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475345 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 22:03:23.475397 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-27 22:03:23.475409 | orchestrator |
2025-09-27 22:03:23.475415 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-27 22:03:23.475422 | orchestrator | Saturday 27 September 2025 22:02:41 +0000 (0:00:03.212) 0:00:18.133 ****
2025-09-27 22:03:23.475428 | orchestrator |
2025-09-27 22:03:23.475434 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-27 22:03:23.475440 | orchestrator | Saturday 27 September 2025 22:02:41 +0000 (0:00:00.269) 0:00:18.403 ****
2025-09-27 22:03:23.475446 | orchestrator |
2025-09-27 22:03:23.475453 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-27 22:03:23.475459 | orchestrator | Saturday 27 September 2025 22:02:42 +0000 (0:00:00.166) 0:00:18.570 ****
2025-09-27 22:03:23.475465 | orchestrator |
2025-09-27 22:03:23.475471 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-27 22:03:23.475477 | orchestrator | Saturday 27 September 2025 22:02:42 +0000 (0:00:00.226) 0:00:18.797 ****
2025-09-27 22:03:23.475483 | orchestrator |
2025-09-27 22:03:23.475489 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-27 22:03:23.475495 | orchestrator | Saturday 27 September 2025 22:02:42 +0000 (0:00:00.342) 0:00:19.139 ****
2025-09-27 22:03:23.475502 | orchestrator |
2025-09-27 22:03:23.475508 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-27 22:03:23.475514 | orchestrator | Saturday 27 September 2025 22:02:42 +0000 (0:00:00.261) 0:00:19.401 ****
2025-09-27 22:03:23.475520 | orchestrator |
2025-09-27 22:03:23.475526 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-27 22:03:23.475532 | orchestrator | Saturday 27 September 2025 22:02:43 +0000 (0:00:00.114) 0:00:19.515 ****
2025-09-27 22:03:23.475539 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:03:23.475545 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:03:23.475551 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:03:23.475557 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:03:23.475563 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:03:23.475569 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:03:23.475576 | orchestrator |
2025-09-27 22:03:23.475582 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-27 22:03:23.475588 | orchestrator | Saturday 27 September 2025 22:02:50 +0000 (0:00:07.535) 0:00:27.050 ****
2025-09-27 22:03:23.475594 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:03:23.475600 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:03:23.475606 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:03:23.475613 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:03:23.475619 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:03:23.475625 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:03:23.475631 | orchestrator |
2025-09-27 22:03:23.475637 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-27 22:03:23.475643 | orchestrator | Saturday 27 September 2025 22:02:51 +0000 (0:00:01.143) 0:00:28.193 ****
2025-09-27 22:03:23.475649 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:03:23.475656 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:03:23.475662 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:03:23.475668 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:03:23.475674 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:03:23.475680 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:03:23.475686 | orchestrator |
2025-09-27 22:03:23.475692 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-27 22:03:23.475699 | orchestrator | Saturday 27 September 2025 22:02:59 +0000 (0:00:08.229) 0:00:36.422 ****
2025-09-27 22:03:23.475705 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-27 22:03:23.475711 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-27 22:03:23.475727 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-27 22:03:23.475740 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-27 22:03:23.475746 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-27 22:03:23.475756 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-27 22:03:23.475763 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-27 22:03:23.475769 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-09-27 22:03:23.475775 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-27 22:03:23.475781 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-27 22:03:23.475787 |
orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-27 22:03:23.475793 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-27 22:03:23.475800 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 22:03:23.475809 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 22:03:23.475815 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 22:03:23.475822 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 22:03:23.475828 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 22:03:23.475834 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 22:03:23.475840 | orchestrator | 2025-09-27 22:03:23.475846 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-27 22:03:23.475852 | orchestrator | Saturday 27 September 2025 22:03:07 +0000 (0:00:07.819) 0:00:44.242 **** 2025-09-27 22:03:23.475859 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-27 22:03:23.475865 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-27 22:03:23.475872 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:23.475878 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:23.475884 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-27 22:03:23.475890 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:23.475896 | 
orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-27 22:03:23.475902 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-27 22:03:23.475909 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-27 22:03:23.475915 | orchestrator | 2025-09-27 22:03:23.475921 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-27 22:03:23.475927 | orchestrator | Saturday 27 September 2025 22:03:10 +0000 (0:00:02.459) 0:00:46.702 **** 2025-09-27 22:03:23.475933 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-27 22:03:23.475939 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:23.475946 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-27 22:03:23.475952 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:23.475958 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-27 22:03:23.475964 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:23.475970 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-27 22:03:23.475976 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-27 22:03:23.475988 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-27 22:03:23.475994 | orchestrator | 2025-09-27 22:03:23.476000 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-27 22:03:23.476006 | orchestrator | Saturday 27 September 2025 22:03:14 +0000 (0:00:03.970) 0:00:50.672 **** 2025-09-27 22:03:23.476012 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:03:23.476018 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:03:23.476025 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:03:23.476031 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:03:23.476037 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:03:23.476043 | 
orchestrator | changed: [testbed-node-5] 2025-09-27 22:03:23.476049 | orchestrator | 2025-09-27 22:03:23.476055 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:03:23.476062 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 22:03:23.476069 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 22:03:23.476075 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 22:03:23.476082 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 22:03:23.476088 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 22:03:23.476099 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 22:03:23.476105 | orchestrator | 2025-09-27 22:03:23.476111 | orchestrator | 2025-09-27 22:03:23.476118 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:03:23.476124 | orchestrator | Saturday 27 September 2025 22:03:23 +0000 (0:00:08.783) 0:00:59.456 **** 2025-09-27 22:03:23.476130 | orchestrator | =============================================================================== 2025-09-27 22:03:23.476136 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.01s 2025-09-27 22:03:23.476143 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.82s 2025-09-27 22:03:23.476149 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 7.54s 2025-09-27 22:03:23.476155 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.97s 2025-09-27 22:03:23.476161 | 
orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.21s 2025-09-27 22:03:23.476167 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.48s 2025-09-27 22:03:23.476192 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.46s 2025-09-27 22:03:23.476198 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.73s 2025-09-27 22:03:23.476204 | orchestrator | module-load : Load modules ---------------------------------------------- 1.61s 2025-09-27 22:03:23.476210 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.50s 2025-09-27 22:03:23.476217 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.46s 2025-09-27 22:03:23.476223 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.42s 2025-09-27 22:03:23.476229 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.38s 2025-09-27 22:03:23.476235 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.22s 2025-09-27 22:03:23.476241 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.15s 2025-09-27 22:03:23.476247 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.14s 2025-09-27 22:03:23.476258 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.09s 2025-09-27 22:03:23.476264 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.73s 2025-09-27 22:03:23.476271 | orchestrator | 2025-09-27 22:03:23 | INFO  | Task dd23de6c-e446-4106-9138-c4d1cc5de0a0 is in state SUCCESS 2025-09-27 22:03:23.476277 | orchestrator | 2025-09-27 22:03:23 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 
22:03:23.476298 | orchestrator | 2025-09-27 22:03:23 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:23.476304 | orchestrator | 2025-09-27 22:03:23 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:23.476310 | orchestrator | 2025-09-27 22:03:23 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:23.476316 | orchestrator | 2025-09-27 22:03:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:26.609448 | orchestrator | 2025-09-27 22:03:26 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:03:26.609877 | orchestrator | 2025-09-27 22:03:26 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:26.610466 | orchestrator | 2025-09-27 22:03:26 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:26.611319 | orchestrator | 2025-09-27 22:03:26 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:26.612025 | orchestrator | 2025-09-27 22:03:26 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:26.612197 | orchestrator | 2025-09-27 22:03:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:29.658762 | orchestrator | 2025-09-27 22:03:29 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:03:29.685093 | orchestrator | 2025-09-27 22:03:29 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:29.685462 | orchestrator | 2025-09-27 22:03:29 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:29.686574 | orchestrator | 2025-09-27 22:03:29 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:29.687175 | orchestrator | 2025-09-27 22:03:29 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 
22:03:29.687255 | orchestrator | 2025-09-27 22:03:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:32.940190 | orchestrator | 2025-09-27 22:03:32 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:03:32.940368 | orchestrator | 2025-09-27 22:03:32 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:32.940386 | orchestrator | 2025-09-27 22:03:32 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:32.940399 | orchestrator | 2025-09-27 22:03:32 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:32.940411 | orchestrator | 2025-09-27 22:03:32 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:32.940422 | orchestrator | 2025-09-27 22:03:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:36.011798 | orchestrator | 2025-09-27 22:03:36 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:03:36.011907 | orchestrator | 2025-09-27 22:03:36 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:36.013153 | orchestrator | 2025-09-27 22:03:36 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:36.015558 | orchestrator | 2025-09-27 22:03:36 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:36.016448 | orchestrator | 2025-09-27 22:03:36 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:36.016486 | orchestrator | 2025-09-27 22:03:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:39.061854 | orchestrator | 2025-09-27 22:03:39 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state STARTED 2025-09-27 22:03:39.062482 | orchestrator | 2025-09-27 22:03:39 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:39.063620 | orchestrator 
| 2025-09-27 22:03:39 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:39.066109 | orchestrator | 2025-09-27 22:03:39 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:39.066647 | orchestrator | 2025-09-27 22:03:39 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:39.066686 | orchestrator | 2025-09-27 22:03:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:42.108885 | orchestrator | 2025-09-27 22:03:42 | INFO  | Task b6aaff37-0b97-4786-97a1-6756429feba3 is in state SUCCESS 2025-09-27 22:03:42.110635 | orchestrator | 2025-09-27 22:03:42.110830 | orchestrator | 2025-09-27 22:03:42.110851 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-27 22:03:42.110864 | orchestrator | 2025-09-27 22:03:42.110876 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-27 22:03:42.110888 | orchestrator | Saturday 27 September 2025 22:00:07 +0000 (0:00:00.205) 0:00:00.205 **** 2025-09-27 22:03:42.110899 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:03:42.110912 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:03:42.110923 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:03:42.110933 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:03:42.110944 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:03:42.110954 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:03:42.110965 | orchestrator | 2025-09-27 22:03:42.110976 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-27 22:03:42.110987 | orchestrator | Saturday 27 September 2025 22:00:08 +0000 (0:00:00.803) 0:00:01.008 **** 2025-09-27 22:03:42.110998 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.111010 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.111021 | orchestrator | skipping: 
[testbed-node-5] 2025-09-27 22:03:42.111032 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.111042 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.111053 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.111064 | orchestrator | 2025-09-27 22:03:42.111075 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-27 22:03:42.111086 | orchestrator | Saturday 27 September 2025 22:00:09 +0000 (0:00:00.788) 0:00:01.797 **** 2025-09-27 22:03:42.111096 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.111107 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.111118 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:42.111128 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.111139 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.111149 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.111160 | orchestrator | 2025-09-27 22:03:42.111171 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-27 22:03:42.111182 | orchestrator | Saturday 27 September 2025 22:00:10 +0000 (0:00:00.792) 0:00:02.590 **** 2025-09-27 22:03:42.111192 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:03:42.111203 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:03:42.111214 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:03:42.111251 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:03:42.111262 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:03:42.111273 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:03:42.111283 | orchestrator | 2025-09-27 22:03:42.111331 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-27 22:03:42.111342 | orchestrator | Saturday 27 September 2025 22:00:12 +0000 (0:00:02.173) 0:00:04.763 **** 2025-09-27 22:03:42.111353 | orchestrator | changed: 
[testbed-node-3] 2025-09-27 22:03:42.111364 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:03:42.111375 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:03:42.111385 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:03:42.111396 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:03:42.111406 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:03:42.111417 | orchestrator | 2025-09-27 22:03:42.111428 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-27 22:03:42.111439 | orchestrator | Saturday 27 September 2025 22:00:13 +0000 (0:00:01.405) 0:00:06.169 **** 2025-09-27 22:03:42.111449 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:03:42.111462 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:03:42.111475 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:03:42.111487 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:03:42.111500 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:03:42.111512 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:03:42.111524 | orchestrator | 2025-09-27 22:03:42.111537 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-27 22:03:42.111549 | orchestrator | Saturday 27 September 2025 22:00:14 +0000 (0:00:01.196) 0:00:07.366 **** 2025-09-27 22:03:42.111563 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.111574 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.111587 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:42.111599 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.111611 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.111623 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.111635 | orchestrator | 2025-09-27 22:03:42.111725 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-27 22:03:42.111743 | orchestrator | 
Saturday 27 September 2025 22:00:15 +0000 (0:00:00.706) 0:00:08.072 **** 2025-09-27 22:03:42.111756 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.111785 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.111798 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:42.111810 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.111822 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.111832 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.111843 | orchestrator | 2025-09-27 22:03:42.111854 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-27 22:03:42.111864 | orchestrator | Saturday 27 September 2025 22:00:16 +0000 (0:00:00.936) 0:00:09.009 **** 2025-09-27 22:03:42.111875 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 22:03:42.111886 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 22:03:42.111896 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.111908 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 22:03:42.111918 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 22:03:42.111929 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.111940 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 22:03:42.111951 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 22:03:42.111961 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:42.111973 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 22:03:42.111996 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 22:03:42.112022 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 22:03:42.112034 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 22:03:42.112044 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 22:03:42.112055 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.112066 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 22:03:42.112077 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 22:03:42.112087 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.112098 | orchestrator | 2025-09-27 22:03:42.112109 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-27 22:03:42.112119 | orchestrator | Saturday 27 September 2025 22:00:17 +0000 (0:00:00.694) 0:00:09.703 **** 2025-09-27 22:03:42.112130 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.112141 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.112152 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:42.112163 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.112173 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.112184 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.112195 | orchestrator | 2025-09-27 22:03:42.112206 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-27 22:03:42.112217 | orchestrator | Saturday 27 September 2025 22:00:18 +0000 (0:00:01.158) 0:00:10.862 **** 2025-09-27 22:03:42.112228 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:03:42.112239 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:03:42.112250 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:03:42.112261 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:03:42.112272 | orchestrator | ok: [testbed-node-1] 2025-09-27 
22:03:42.112282 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:03:42.112317 | orchestrator | 2025-09-27 22:03:42.112328 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-27 22:03:42.112339 | orchestrator | Saturday 27 September 2025 22:00:19 +0000 (0:00:00.828) 0:00:11.690 **** 2025-09-27 22:03:42.112350 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:03:42.112361 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:03:42.112372 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:03:42.112382 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:03:42.112393 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:03:42.112404 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:03:42.112414 | orchestrator | 2025-09-27 22:03:42.112425 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-27 22:03:42.112436 | orchestrator | Saturday 27 September 2025 22:00:24 +0000 (0:00:05.751) 0:00:17.442 **** 2025-09-27 22:03:42.112447 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.112457 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.112468 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:42.112479 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.112490 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.112500 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.112511 | orchestrator | 2025-09-27 22:03:42.112522 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-27 22:03:42.112533 | orchestrator | Saturday 27 September 2025 22:00:26 +0000 (0:00:01.634) 0:00:19.076 **** 2025-09-27 22:03:42.112544 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.112554 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.112565 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.112576 | 
orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:42.112587 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.112597 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.112608 | orchestrator | 2025-09-27 22:03:42.112619 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-27 22:03:42.112639 | orchestrator | Saturday 27 September 2025 22:00:28 +0000 (0:00:01.639) 0:00:20.716 **** 2025-09-27 22:03:42.112650 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:03:42.112660 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:03:42.112671 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:03:42.112771 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:03:42.112786 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:03:42.112797 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:03:42.112808 | orchestrator | 2025-09-27 22:03:42.112819 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-27 22:03:42.112830 | orchestrator | Saturday 27 September 2025 22:00:28 +0000 (0:00:00.756) 0:00:21.472 **** 2025-09-27 22:03:42.112847 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-27 22:03:42.112860 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-27 22:03:42.112871 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-27 22:03:42.112881 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-27 22:03:42.112892 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-27 22:03:42.112903 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-27 22:03:42.112914 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-27 22:03:42.112925 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-27 22:03:42.112936 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 
changed: [testbed-node-4] => (item=rancher/k3s)
changed: [testbed-node-1] => (item=rancher/k3s)
changed: [testbed-node-2] => (item=rancher/k3s)

TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
Saturday 27 September 2025 22:00:30 +0000 (0:00:01.507) 0:00:22.980 ****
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

PLAY [Deploy k3s master nodes] *************************************************

TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
Saturday 27 September 2025 22:00:31 +0000 (0:00:01.454) 0:00:24.435 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Stop k3s-init] **********************************************
Saturday 27 September 2025 22:00:33 +0000 (0:00:01.292) 0:00:25.727 ****
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [k3s_server : Stop k3s] ***************************************************
Saturday 27 September 2025 22:00:34 +0000 (0:00:01.158) 0:00:26.886 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Clean previous runs of k3s-init] ****************************
Saturday 27 September 2025 22:00:35 +0000 (0:00:00.850) 0:00:27.736 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
Saturday 27 September 2025 22:00:36 +0000 (0:00:00.980) 0:00:28.717 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
Saturday 27 September 2025 22:00:36 +0000 (0:00:00.312) 0:00:29.030 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Create custom resolv.conf for k3s] **************************
Saturday 27 September 2025 22:00:37 +0000 (0:00:00.631) 0:00:29.662 ****
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [k3s_server : Deploy vip manifest] ****************************************
Saturday 27 September 2025 22:00:38 +0000 (0:00:01.519) 0:00:31.181 ****
included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
Saturday 27 September 2025 22:00:39 +0000 (0:00:00.593) 0:00:31.775 ****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Create manifests directory on first master] *****************
Saturday 27 September 2025 22:00:41 +0000 (0:00:02.258) 0:00:34.034 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Download vip rbac manifest to first master] *****************
Saturday 27 September 2025 22:00:42 +0000 (0:00:00.741) 0:00:34.775 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Copy vip manifest to first master] **************************
Saturday 27 September 2025 22:00:43 +0000 (0:00:01.176) 0:00:35.952 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Deploy metallb manifest] ************************************
Saturday 27 September 2025 22:00:45 +0000 (0:00:01.747) 0:00:37.699 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Saturday 27 September 2025 22:00:45 +0000 (0:00:00.451) 0:00:38.151 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Saturday 27 September 2025 22:00:46 +0000 (0:00:00.494) 0:00:38.646 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Saturday 27 September 2025 22:00:48 +0000 (0:00:02.394) 0:00:41.040 ****
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
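The run of FAILED - RETRYING messages is Ansible's `until`/`retries` loop polling until every master has joined the cluster. The role's exact condition isn't shown in the log; a minimal sketch of the kind of readiness check involved (assuming it inspects `k3s kubectl get nodes` output, with a hypothetical helper name) is:

```python
def all_nodes_ready(get_nodes_output: str, expected: int) -> bool:
    """Count Ready nodes in `kubectl get nodes` output.

    Hypothetical helper mirroring the role's retry condition; the real
    task may query different fields or use JSON output instead.
    """
    ready = 0
    for line in get_nodes_output.strip().splitlines()[1:]:  # skip header row
        columns = line.split()
        if len(columns) >= 2 and columns[1] == "Ready":
            ready += 1
    return ready >= expected

# Illustrative output while one master is still joining:
sample = """NAME            STATUS     ROLES                       AGE   VERSION
testbed-node-0  Ready      control-plane,etcd,master   1m    v1.30.4+k3s1
testbed-node-1  NotReady   control-plane,etcd,master   30s   v1.30.4+k3s1
testbed-node-2  Ready      control-plane,etcd,master   28s   v1.30.4+k3s1"""

print(all_nodes_ready(sample, 3))  # → False
```

The loop keeps retrying (with a delay between attempts) until this kind of check returns true or the retry budget is exhausted, which is why the task later succeeds after roughly 55 seconds.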
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Saturday 27 September 2025 22:01:44 +0000 (0:00:55.653) 0:01:36.694 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Saturday 27 September 2025 22:01:44 +0000 (0:00:00.261) 0:01:36.955 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Saturday 27 September 2025 22:01:45 +0000 (0:00:00.883) 0:01:37.839 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Saturday 27 September 2025 22:01:46 +0000 (0:00:01.070) 0:01:38.909 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Wait for node-token] ****************************************
Saturday 27 September 2025 22:02:13 +0000 (0:00:26.710) 0:02:05.620 ****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Saturday 27 September 2025 22:02:13 +0000 (0:00:00.598) 0:02:06.219 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Saturday 27 September 2025 22:02:14 +0000 (0:00:00.685) 0:02:06.904 ****
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Saturday 27 September 2025 22:02:15 +0000 (0:00:00.631) 0:02:07.536 ****
ok: [testbed-node-0]
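The Register / Change / Read / Restore sequence around node-token is a standard pattern: record the token file's current mode, relax it so the token can be read, then put the original mode back. A self-contained sketch of that pattern (the temp-file path and modes are illustrative, not the role's actual values):

```python
import os
import stat
import tempfile

# Illustrative stand-in for the k3s server node-token file.
with tempfile.NamedTemporaryFile("w", delete=False) as handle:
    handle.write("K10abc::server:secret\n")
token_path = handle.name
os.chmod(token_path, 0o600)  # assume the file starts out owner-only

original_mode = stat.S_IMODE(os.stat(token_path).st_mode)  # "Register ... access mode"
os.chmod(token_path, 0o644)                                # "Change file access"
with open(token_path) as handle:                           # "Read node-token"
    token = handle.read().strip()
os.chmod(token_path, original_mode)                        # "Restore ... file access"

print(token, oct(stat.S_IMODE(os.stat(token_path).st_mode)))
```

Restoring the recorded mode rather than hard-coding one keeps the task idempotent even if the file's permissions differ between runs.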
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Saturday 27 September 2025 22:02:15 +0000 (0:00:00.761) 0:02:08.298 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Saturday 27 September 2025 22:02:16 +0000 (0:00:00.344) 0:02:08.642 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Saturday 27 September 2025 22:02:16 +0000 (0:00:00.623) 0:02:09.266 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Saturday 27 September 2025 22:02:17 +0000 (0:00:00.616) 0:02:09.882 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Saturday 27 September 2025 22:02:18 +0000 (0:00:01.105) 0:02:10.988 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Saturday 27 September 2025 22:02:19 +0000 (0:00:00.912) 0:02:11.900 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Saturday 27 September 2025 22:02:19 +0000 (0:00:00.307) 0:02:12.207 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Saturday 27 September 2025 22:02:19 +0000 (0:00:00.285) 0:02:12.493 ****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Saturday 27 September 2025 22:02:20 +0000 (0:00:00.954) 0:02:13.448 ****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Saturday 27 September 2025 22:02:21 +0000 (0:00:00.692) 0:02:14.140 ****
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Saturday 27 September 2025 22:02:24 +0000 (0:00:02.916) 0:02:17.056 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Saturday 27 September 2025 22:02:25 +0000 (0:00:00.483) 0:02:17.540 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Saturday 27 September 2025 22:02:25 +0000 (0:00:00.627) 0:02:18.167 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Saturday 27 September 2025 22:02:25 +0000 (0:00:00.287) 0:02:18.455 ****
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Saturday 27 September 2025 22:02:26 +0000 (0:00:00.629) 0:02:19.084 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Saturday 27 September 2025 22:02:26 +0000 (0:00:00.318) 0:02:19.403 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Saturday 27 September 2025 22:02:27 +0000 (0:00:00.283) 0:02:19.686 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Saturday 27 September 2025 22:02:27 +0000 (0:00:00.277) 0:02:19.964 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Saturday 27 September 2025 22:02:28 +0000 (0:00:00.796) 0:02:20.760 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Saturday 27 September 2025 22:02:29 +0000 (0:00:01.188) 0:02:21.949 ****
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [k3s_agent : Manage k3s service] ******************************************
Saturday 27 September 2025 22:02:30 +0000 (0:00:01.233) 0:02:23.182 ****
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Saturday 27 September 2025 22:02:42 +0000 (0:00:12.054) 0:02:35.237 ****
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Saturday 27 September 2025 22:02:43 +0000 (0:00:01.151) 0:02:36.388 ****
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Saturday 27 September 2025 22:02:44 +0000 (0:00:00.397) 0:02:36.786 ****
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Saturday 27 September 2025 22:02:44 +0000 (0:00:00.507) 0:02:37.293 ****
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Saturday 27 September 2025 22:02:45 +0000 (0:00:00.763) 0:02:38.056 ****
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Saturday 27 September 2025 22:02:46 +0000 (0:00:00.620) 0:02:38.676 ****
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Saturday 27 September 2025 22:02:47 +0000 (0:00:01.293) 0:02:39.970 ****
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Saturday 27 September 2025 22:02:48 +0000 (0:00:00.743) 0:02:40.713 ****
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Saturday 27 September 2025 22:02:48 +0000 (0:00:00.404) 0:02:41.117 ****
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Saturday 27 September 2025 22:02:49 +0000 (0:00:00.492) 0:02:41.609 ****
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Saturday 27 September 2025 22:02:49 +0000 (0:00:00.113) 0:02:41.722 ****
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Saturday 27 September 2025 22:02:49 +0000 (0:00:00.216) 0:02:41.940 ****
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
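The two "Change server address in the kubeconfig" tasks in the kubeconfig play above rewrite the `server:` field so the file points at an endpoint reachable from the manager (the cluster was configured for https://192.168.16.8:6443 earlier in the run). A minimal sketch of such a rewrite, assuming a plain text substitution rather than a YAML-aware edit:

```python
import re

# Illustrative kubeconfig fragment as fetched from the first master.
kubeconfig = """apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
"""

# Swap the node-local endpoint for the cluster endpoint seen in the log.
rewritten = re.sub(
    r"server: https://[0-9.]+:6443",
    "server: https://192.168.16.8:6443",
    kubeconfig,
)
print(rewritten)
```

The same substitution is then repeated on the copy made available inside the manager service, which is why the log shows the task twice with different delegation targets.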
Saturday 27 September 2025 22:02:50 +0000 (0:00:00.644) 0:02:42.584 ****
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Saturday 27 September 2025 22:02:51 +0000 (0:00:01.458) 0:02:44.043 ****
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Saturday 27 September 2025 22:02:52 +0000 (0:00:01.073) 0:02:45.116 ****
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Saturday 27 September 2025 22:02:53 +0000 (0:00:00.470) 0:02:45.586 ****
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Saturday 27 September 2025 22:02:59 +0000 (0:00:06.592) 0:02:52.179 ****
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Saturday 27 September 2025 22:03:10 +0000 (0:00:11.275) 0:03:03.455 ****
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Saturday 27 September 2025 22:03:11 +0000 (0:00:00.480) 0:03:03.935 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Saturday 27 September 2025 22:03:11 +0000 (0:00:00.409) 0:03:04.345 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Saturday 27 September 2025 22:03:12 +0000 (0:00:00.265) 0:03:04.610 ****
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Saturday 27 September 2025 22:03:12 +0000 (0:00:00.625) 0:03:05.236 ****
skipping: [testbed-node-0]

TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
Saturday 27 September 2025 22:03:12 +0000 (0:00:00.236) 0:03:05.472 ****
skipping: [testbed-node-0]

TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
Saturday 27 September 2025 22:03:13 +0000 (0:00:00.202) 0:03:05.675 ****
skipping: [testbed-node-0]

TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
Saturday 27 September 2025 22:03:13 +0000 (0:00:00.172) 0:03:05.847 ****
skipping: [testbed-node-0]

TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
Saturday 27 September 2025 22:03:13 +0000 (0:00:00.186) 0:03:06.033 ****
skipping: [testbed-node-0]

TASK [k3s_server_post : Log installed Cilium CLI version] **********************
Saturday 27 September 2025 22:03:13 +0000 (0:00:00.195) 0:03:06.228 ****
skipping: [testbed-node-0]

TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
Saturday 27 September 2025 22:03:13 +0000 (0:00:00.193) 0:03:06.422 ****
2025-09-27 22:03:42.117934 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.117942 | orchestrator | 2025-09-27 22:03:42.117950 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-27 22:03:42.117957 | orchestrator | Saturday 27 September 2025 22:03:14 +0000 (0:00:00.202) 0:03:06.625 **** 2025-09-27 22:03:42.117965 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.117973 | orchestrator | 2025-09-27 22:03:42.117981 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-27 22:03:42.117989 | orchestrator | Saturday 27 September 2025 22:03:14 +0000 (0:00:00.369) 0:03:06.994 **** 2025-09-27 22:03:42.117997 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118005 | orchestrator | 2025-09-27 22:03:42.118012 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-27 22:03:42.118077 | orchestrator | Saturday 27 September 2025 22:03:14 +0000 (0:00:00.173) 0:03:07.167 **** 2025-09-27 22:03:42.118091 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-27 22:03:42.118105 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-27 22:03:42.118124 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118139 | orchestrator | 2025-09-27 22:03:42.118152 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-27 22:03:42.118165 | orchestrator | Saturday 27 September 2025 22:03:15 +0000 (0:00:00.830) 0:03:07.998 **** 2025-09-27 22:03:42.118179 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118192 | orchestrator | 2025-09-27 22:03:42.118205 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-27 22:03:42.118219 | orchestrator | Saturday 27 September 2025 22:03:15 +0000 (0:00:00.218) 0:03:08.217 **** 2025-09-27 
22:03:42.118230 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118238 | orchestrator | 2025-09-27 22:03:42.118246 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-27 22:03:42.118254 | orchestrator | Saturday 27 September 2025 22:03:15 +0000 (0:00:00.173) 0:03:08.391 **** 2025-09-27 22:03:42.118262 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118269 | orchestrator | 2025-09-27 22:03:42.118277 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-27 22:03:42.118285 | orchestrator | Saturday 27 September 2025 22:03:16 +0000 (0:00:00.172) 0:03:08.563 **** 2025-09-27 22:03:42.118349 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118359 | orchestrator | 2025-09-27 22:03:42.118367 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-27 22:03:42.118375 | orchestrator | Saturday 27 September 2025 22:03:16 +0000 (0:00:00.247) 0:03:08.810 **** 2025-09-27 22:03:42.118383 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118391 | orchestrator | 2025-09-27 22:03:42.118399 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-27 22:03:42.118407 | orchestrator | Saturday 27 September 2025 22:03:16 +0000 (0:00:00.292) 0:03:09.103 **** 2025-09-27 22:03:42.118415 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118422 | orchestrator | 2025-09-27 22:03:42.118430 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-27 22:03:42.118446 | orchestrator | Saturday 27 September 2025 22:03:16 +0000 (0:00:00.196) 0:03:09.299 **** 2025-09-27 22:03:42.118454 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118462 | orchestrator | 2025-09-27 22:03:42.118470 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] 
************************ 2025-09-27 22:03:42.118477 | orchestrator | Saturday 27 September 2025 22:03:16 +0000 (0:00:00.154) 0:03:09.453 **** 2025-09-27 22:03:42.118494 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118502 | orchestrator | 2025-09-27 22:03:42.118510 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-27 22:03:42.118518 | orchestrator | Saturday 27 September 2025 22:03:17 +0000 (0:00:00.179) 0:03:09.633 **** 2025-09-27 22:03:42.118526 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118533 | orchestrator | 2025-09-27 22:03:42.118541 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-27 22:03:42.118549 | orchestrator | Saturday 27 September 2025 22:03:17 +0000 (0:00:00.139) 0:03:09.772 **** 2025-09-27 22:03:42.118557 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118565 | orchestrator | 2025-09-27 22:03:42.118572 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-27 22:03:42.118580 | orchestrator | Saturday 27 September 2025 22:03:17 +0000 (0:00:00.144) 0:03:09.916 **** 2025-09-27 22:03:42.118588 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118596 | orchestrator | 2025-09-27 22:03:42.118604 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-27 22:03:42.118611 | orchestrator | Saturday 27 September 2025 22:03:17 +0000 (0:00:00.166) 0:03:10.083 **** 2025-09-27 22:03:42.118619 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-27 22:03:42.118627 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-27 22:03:42.118635 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-27 22:03:42.118643 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-27 
22:03:42.118651 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118658 | orchestrator | 2025-09-27 22:03:42.118666 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-27 22:03:42.118674 | orchestrator | Saturday 27 September 2025 22:03:18 +0000 (0:00:00.790) 0:03:10.873 **** 2025-09-27 22:03:42.118682 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118690 | orchestrator | 2025-09-27 22:03:42.118697 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-27 22:03:42.118705 | orchestrator | Saturday 27 September 2025 22:03:18 +0000 (0:00:00.183) 0:03:11.057 **** 2025-09-27 22:03:42.118713 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118721 | orchestrator | 2025-09-27 22:03:42.118734 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-27 22:03:42.118742 | orchestrator | Saturday 27 September 2025 22:03:18 +0000 (0:00:00.182) 0:03:11.239 **** 2025-09-27 22:03:42.118749 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118757 | orchestrator | 2025-09-27 22:03:42.118765 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-27 22:03:42.118773 | orchestrator | Saturday 27 September 2025 22:03:18 +0000 (0:00:00.176) 0:03:11.415 **** 2025-09-27 22:03:42.118781 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118788 | orchestrator | 2025-09-27 22:03:42.118796 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-09-27 22:03:42.118804 | orchestrator | Saturday 27 September 2025 22:03:19 +0000 (0:00:00.186) 0:03:11.602 **** 2025-09-27 22:03:42.118812 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-27 22:03:42.118819 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get 
CiliumLoadBalancerIPPool.cilium.io)  2025-09-27 22:03:42.118827 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118835 | orchestrator | 2025-09-27 22:03:42.118843 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-27 22:03:42.118851 | orchestrator | Saturday 27 September 2025 22:03:19 +0000 (0:00:00.221) 0:03:11.824 **** 2025-09-27 22:03:42.118859 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.118866 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.118872 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.118879 | orchestrator | 2025-09-27 22:03:42.118886 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-27 22:03:42.118897 | orchestrator | Saturday 27 September 2025 22:03:19 +0000 (0:00:00.239) 0:03:12.064 **** 2025-09-27 22:03:42.118904 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:03:42.118910 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:03:42.118917 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:03:42.118924 | orchestrator | 2025-09-27 22:03:42.118930 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-27 22:03:42.118937 | orchestrator | 2025-09-27 22:03:42.118944 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-27 22:03:42.118950 | orchestrator | Saturday 27 September 2025 22:03:20 +0000 (0:00:01.067) 0:03:13.131 **** 2025-09-27 22:03:42.118957 | orchestrator | ok: [testbed-manager] 2025-09-27 22:03:42.118963 | orchestrator | 2025-09-27 22:03:42.118970 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-27 22:03:42.118976 | orchestrator | Saturday 27 September 2025 22:03:20 +0000 (0:00:00.129) 0:03:13.261 **** 2025-09-27 22:03:42.118983 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml 
for testbed-manager 2025-09-27 22:03:42.118990 | orchestrator | 2025-09-27 22:03:42.118996 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-27 22:03:42.119003 | orchestrator | Saturday 27 September 2025 22:03:20 +0000 (0:00:00.181) 0:03:13.443 **** 2025-09-27 22:03:42.119009 | orchestrator | changed: [testbed-manager] 2025-09-27 22:03:42.119016 | orchestrator | 2025-09-27 22:03:42.119023 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-27 22:03:42.119029 | orchestrator | 2025-09-27 22:03:42.119036 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-27 22:03:42.119046 | orchestrator | Saturday 27 September 2025 22:03:26 +0000 (0:00:05.406) 0:03:18.849 **** 2025-09-27 22:03:42.119053 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:03:42.119060 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:03:42.119066 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:03:42.119073 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:03:42.119079 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:03:42.119086 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:03:42.119092 | orchestrator | 2025-09-27 22:03:42.119099 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-27 22:03:42.119106 | orchestrator | Saturday 27 September 2025 22:03:27 +0000 (0:00:00.810) 0:03:19.660 **** 2025-09-27 22:03:42.119112 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-27 22:03:42.119119 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-27 22:03:42.119126 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-27 22:03:42.119132 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2025-09-27 22:03:42.119139 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-27 22:03:42.119160 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-27 22:03:42.119176 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-27 22:03:42.119199 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-27 22:03:42.119209 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-27 22:03:42.119220 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-27 22:03:42.119230 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-27 22:03:42.119241 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-27 22:03:42.119253 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-27 22:03:42.119274 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-27 22:03:42.119285 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-27 22:03:42.119316 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-27 22:03:42.119328 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-27 22:03:42.119335 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-27 22:03:42.119342 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-27 22:03:42.119349 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/rook-mds=true) 2025-09-27 22:03:42.119355 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-27 22:03:42.119362 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-27 22:03:42.119369 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-27 22:03:42.119375 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-27 22:03:42.119382 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-27 22:03:42.119389 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-27 22:03:42.119396 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-27 22:03:42.119402 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-27 22:03:42.119409 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-27 22:03:42.119416 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-27 22:03:42.119422 | orchestrator | 2025-09-27 22:03:42.119429 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-27 22:03:42.119436 | orchestrator | Saturday 27 September 2025 22:03:40 +0000 (0:00:13.036) 0:03:32.697 **** 2025-09-27 22:03:42.119442 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.119449 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.119456 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:42.119463 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.119469 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.119476 | orchestrator | skipping: [testbed-node-2] 2025-09-27 
22:03:42.119483 | orchestrator | 2025-09-27 22:03:42.119489 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-27 22:03:42.119496 | orchestrator | Saturday 27 September 2025 22:03:40 +0000 (0:00:00.536) 0:03:33.233 **** 2025-09-27 22:03:42.119503 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:03:42.119510 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:03:42.119516 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:03:42.119523 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:03:42.119530 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:03:42.119537 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:03:42.119543 | orchestrator | 2025-09-27 22:03:42.119550 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:03:42.119563 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:03:42.119572 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-27 22:03:42.119579 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-27 22:03:42.119586 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-27 22:03:42.119599 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-27 22:03:42.119606 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-27 22:03:42.119612 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-27 22:03:42.119619 | orchestrator | 2025-09-27 22:03:42.119626 | orchestrator | 2025-09-27 22:03:42.119632 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-27 22:03:42.119639 | orchestrator | Saturday 27 September 2025 22:03:41 +0000 (0:00:00.380) 0:03:33.614 **** 2025-09-27 22:03:42.119646 | orchestrator | =============================================================================== 2025-09-27 22:03:42.119652 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.65s 2025-09-27 22:03:42.119659 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.71s 2025-09-27 22:03:42.119666 | orchestrator | Manage labels ---------------------------------------------------------- 13.04s 2025-09-27 22:03:42.119673 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.05s 2025-09-27 22:03:42.119679 | orchestrator | kubectl : Install required packages ------------------------------------ 11.28s 2025-09-27 22:03:42.119686 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.59s 2025-09-27 22:03:42.119693 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.75s 2025-09-27 22:03:42.119703 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.41s 2025-09-27 22:03:42.119710 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.92s 2025-09-27 22:03:42.119717 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.39s 2025-09-27 22:03:42.119723 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.26s 2025-09-27 22:03:42.119730 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.17s 2025-09-27 22:03:42.119737 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.75s 
2025-09-27 22:03:42.119743 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.64s 2025-09-27 22:03:42.119750 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.63s 2025-09-27 22:03:42.119756 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.52s 2025-09-27 22:03:42.119763 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.51s 2025-09-27 22:03:42.119770 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.46s 2025-09-27 22:03:42.119776 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.45s 2025-09-27 22:03:42.119783 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.41s 2025-09-27 22:03:42.119790 | orchestrator | 2025-09-27 22:03:42 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:42.119796 | orchestrator | 2025-09-27 22:03:42 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:42.119803 | orchestrator | 2025-09-27 22:03:42 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:42.119810 | orchestrator | 2025-09-27 22:03:42 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:42.119817 | orchestrator | 2025-09-27 22:03:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:45.223539 | orchestrator | 2025-09-27 22:03:45 | INFO  | Task d357d07e-1486-4907-b8a0-73fe954bfc2b is in state STARTED 2025-09-27 22:03:45.224653 | orchestrator | 2025-09-27 22:03:45 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:45.224711 | orchestrator | 2025-09-27 22:03:45 | INFO  | Task 8a3f1aab-be1b-4f9b-83e0-43bf4a4fc333 is in state STARTED 2025-09-27 22:03:45.224732 | orchestrator | 2025-09-27 
22:03:45 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:45.225375 | orchestrator | 2025-09-27 22:03:45 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:45.226122 | orchestrator | 2025-09-27 22:03:45 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:45.226174 | orchestrator | 2025-09-27 22:03:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:48.263880 | orchestrator | 2025-09-27 22:03:48 | INFO  | Task d357d07e-1486-4907-b8a0-73fe954bfc2b is in state STARTED 2025-09-27 22:03:48.266670 | orchestrator | 2025-09-27 22:03:48 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:48.267153 | orchestrator | 2025-09-27 22:03:48 | INFO  | Task 8a3f1aab-be1b-4f9b-83e0-43bf4a4fc333 is in state STARTED 2025-09-27 22:03:48.267586 | orchestrator | 2025-09-27 22:03:48 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:48.271665 | orchestrator | 2025-09-27 22:03:48 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:48.275823 | orchestrator | 2025-09-27 22:03:48 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:48.275872 | orchestrator | 2025-09-27 22:03:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:51.306620 | orchestrator | 2025-09-27 22:03:51 | INFO  | Task d357d07e-1486-4907-b8a0-73fe954bfc2b is in state STARTED 2025-09-27 22:03:51.307451 | orchestrator | 2025-09-27 22:03:51 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:51.308895 | orchestrator | 2025-09-27 22:03:51 | INFO  | Task 8a3f1aab-be1b-4f9b-83e0-43bf4a4fc333 is in state SUCCESS 2025-09-27 22:03:51.309568 | orchestrator | 2025-09-27 22:03:51 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:51.310532 | orchestrator | 2025-09-27 
22:03:51 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:51.311724 | orchestrator | 2025-09-27 22:03:51 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:51.311776 | orchestrator | 2025-09-27 22:03:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:54.342962 | orchestrator | 2025-09-27 22:03:54 | INFO  | Task d357d07e-1486-4907-b8a0-73fe954bfc2b is in state SUCCESS 2025-09-27 22:03:54.343066 | orchestrator | 2025-09-27 22:03:54 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:54.343417 | orchestrator | 2025-09-27 22:03:54 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:54.344063 | orchestrator | 2025-09-27 22:03:54 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:54.344636 | orchestrator | 2025-09-27 22:03:54 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:54.344674 | orchestrator | 2025-09-27 22:03:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:03:57.385497 | orchestrator | 2025-09-27 22:03:57 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:03:57.385603 | orchestrator | 2025-09-27 22:03:57 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:03:57.386214 | orchestrator | 2025-09-27 22:03:57 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:03:57.387001 | orchestrator | 2025-09-27 22:03:57 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:03:57.387147 | orchestrator | 2025-09-27 22:03:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:04:00.421791 | orchestrator | 2025-09-27 22:04:00 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:04:00.423111 | orchestrator | 2025-09-27 22:04:00 | INFO  | Task 
16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:04:00.423974 | orchestrator | 2025-09-27 22:04:00 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:04:00.425530 | orchestrator | 2025-09-27 22:04:00 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:04:00.425563 | orchestrator | 2025-09-27 22:04:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:04:03.472198 | orchestrator | 2025-09-27 22:04:03 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:04:03.473271 | orchestrator | 2025-09-27 22:04:03 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:04:03.474930 | orchestrator | 2025-09-27 22:04:03 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:04:03.477530 | orchestrator | 2025-09-27 22:04:03 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:04:03.477559 | orchestrator | 2025-09-27 22:04:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:04:06.510697 | orchestrator | 2025-09-27 22:04:06 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:04:06.512221 | orchestrator | 2025-09-27 22:04:06 | INFO  | Task 16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:04:06.513844 | orchestrator | 2025-09-27 22:04:06 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:04:06.515847 | orchestrator | 2025-09-27 22:04:06 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:04:06.515878 | orchestrator | 2025-09-27 22:04:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:04:09.578737 | orchestrator | 2025-09-27 22:04:09 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:04:09.585734 | orchestrator | 2025-09-27 22:04:09 | INFO  | Task 
16755812-266a-4d3a-8332-8bdbdefa1705 is in state STARTED 2025-09-27 22:04:09.587986 | orchestrator | 2025-09-27 22:04:09 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:04:09.592270 | orchestrator | 2025-09-27 22:04:09 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:04:09.592399 | orchestrator | 2025-09-27 22:04:09 | INFO  | Wait 1 second(s) until the next check
[... the same four-task poll cycle repeats every ~3 seconds from 22:04:12 through 22:04:52, all tasks remaining in state STARTED ...]
2025-09-27 22:04:55.260912 | orchestrator | 2025-09-27 22:04:55 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED 2025-09-27 22:04:55.261778 | orchestrator | 2025-09-27 22:04:55 | INFO  | Task 
16755812-266a-4d3a-8332-8bdbdefa1705 is in state SUCCESS 2025-09-27 22:04:55.263241 | orchestrator | 2025-09-27 22:04:55.263360 | orchestrator | 2025-09-27 22:04:55.263378 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-27 22:04:55.263391 | orchestrator | 2025-09-27 22:04:55.263403 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-27 22:04:55.263415 | orchestrator | Saturday 27 September 2025 22:03:45 +0000 (0:00:00.145) 0:00:00.145 **** 2025-09-27 22:04:55.263426 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-27 22:04:55.263437 | orchestrator | 2025-09-27 22:04:55.263448 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-27 22:04:55.263474 | orchestrator | Saturday 27 September 2025 22:03:46 +0000 (0:00:00.791) 0:00:00.937 **** 2025-09-27 22:04:55.263486 | orchestrator | changed: [testbed-manager] 2025-09-27 22:04:55.263498 | orchestrator | 2025-09-27 22:04:55.263508 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-27 22:04:55.263519 | orchestrator | Saturday 27 September 2025 22:03:47 +0000 (0:00:01.068) 0:00:02.006 **** 2025-09-27 22:04:55.263530 | orchestrator | changed: [testbed-manager] 2025-09-27 22:04:55.263541 | orchestrator | 2025-09-27 22:04:55.263551 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:04:55.263562 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:04:55.263574 | orchestrator | 2025-09-27 22:04:55.263585 | orchestrator | 2025-09-27 22:04:55.263596 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:04:55.263606 | orchestrator | Saturday 27 September 2025 22:03:48 +0000 (0:00:00.696) 0:00:02.703 **** 
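The repeated "is in state STARTED" / "Wait 1 second(s) until the next check" lines above are a poll-until-terminal loop over the submitted tasks. A minimal sketch of that pattern follows; `get_task_state` is a hypothetical stand-in for the OSISM API call that reports a task's state, not the actual implementation:

```python
import time
from typing import Callable, Iterable

# Celery-style terminal states; a task in any of these stops being polled.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0,
                   log=print) -> None:
    """Poll every task until all of them reach a terminal state."""
    pending = list(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.remove(task_id)
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

The fixed sleep between rounds matches the log's "Wait 1 second(s)" message; the observed ~3 s cadence per cycle would then come from the per-task state queries themselves taking time.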
2025-09-27 22:04:55.263617 | orchestrator | =============================================================================== 2025-09-27 22:04:55.263627 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.07s 2025-09-27 22:04:55.263638 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s 2025-09-27 22:04:55.263649 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.70s 2025-09-27 22:04:55.263659 | orchestrator | 2025-09-27 22:04:55.263670 | orchestrator | 2025-09-27 22:04:55.263681 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-27 22:04:55.263829 | orchestrator | 2025-09-27 22:04:55.263847 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-27 22:04:55.263858 | orchestrator | Saturday 27 September 2025 22:03:45 +0000 (0:00:00.236) 0:00:00.236 **** 2025-09-27 22:04:55.263892 | orchestrator | ok: [testbed-manager] 2025-09-27 22:04:55.263905 | orchestrator | 2025-09-27 22:04:55.263916 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-27 22:04:55.263928 | orchestrator | Saturday 27 September 2025 22:03:45 +0000 (0:00:00.752) 0:00:00.988 **** 2025-09-27 22:04:55.263939 | orchestrator | ok: [testbed-manager] 2025-09-27 22:04:55.263950 | orchestrator | 2025-09-27 22:04:55.263961 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-27 22:04:55.263973 | orchestrator | Saturday 27 September 2025 22:03:46 +0000 (0:00:00.482) 0:00:01.470 **** 2025-09-27 22:04:55.263984 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-27 22:04:55.263995 | orchestrator | 2025-09-27 22:04:55.264007 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-27 22:04:55.264018 | 
orchestrator | Saturday 27 September 2025 22:03:47 +0000 (0:00:00.679) 0:00:02.150 **** 2025-09-27 22:04:55.264029 | orchestrator | changed: [testbed-manager] 2025-09-27 22:04:55.264040 | orchestrator | 2025-09-27 22:04:55.264051 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-27 22:04:55.264062 | orchestrator | Saturday 27 September 2025 22:03:48 +0000 (0:00:01.112) 0:00:03.263 **** 2025-09-27 22:04:55.264074 | orchestrator | changed: [testbed-manager] 2025-09-27 22:04:55.264085 | orchestrator | 2025-09-27 22:04:55.264096 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-27 22:04:55.264108 | orchestrator | Saturday 27 September 2025 22:03:48 +0000 (0:00:00.658) 0:00:03.922 **** 2025-09-27 22:04:55.264119 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-27 22:04:55.264130 | orchestrator | 2025-09-27 22:04:55.264142 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-27 22:04:55.264154 | orchestrator | Saturday 27 September 2025 22:03:50 +0000 (0:00:01.259) 0:00:05.181 **** 2025-09-27 22:04:55.264165 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-27 22:04:55.264177 | orchestrator | 2025-09-27 22:04:55.264188 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-27 22:04:55.264200 | orchestrator | Saturday 27 September 2025 22:03:50 +0000 (0:00:00.642) 0:00:05.824 **** 2025-09-27 22:04:55.264211 | orchestrator | ok: [testbed-manager] 2025-09-27 22:04:55.264222 | orchestrator | 2025-09-27 22:04:55.264234 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-27 22:04:55.264246 | orchestrator | Saturday 27 September 2025 22:03:51 +0000 (0:00:00.411) 0:00:06.236 **** 2025-09-27 22:04:55.264257 | orchestrator | ok: [testbed-manager] 2025-09-27 22:04:55.264268 | 
orchestrator | 2025-09-27 22:04:55.264279 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:04:55.264291 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:04:55.264303 | orchestrator | 2025-09-27 22:04:55.264333 | orchestrator | 2025-09-27 22:04:55.264345 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:04:55.264356 | orchestrator | Saturday 27 September 2025 22:03:51 +0000 (0:00:00.263) 0:00:06.499 **** 2025-09-27 22:04:55.264368 | orchestrator | =============================================================================== 2025-09-27 22:04:55.264379 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.26s 2025-09-27 22:04:55.264390 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.12s 2025-09-27 22:04:55.264402 | orchestrator | Get home directory of operator user ------------------------------------- 0.75s 2025-09-27 22:04:55.264432 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.68s 2025-09-27 22:04:55.264445 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.66s 2025-09-27 22:04:55.264458 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.64s 2025-09-27 22:04:55.264471 | orchestrator | Create .kube directory -------------------------------------------------- 0.48s 2025-09-27 22:04:55.264492 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s 2025-09-27 22:04:55.264512 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2025-09-27 22:04:55.264525 | orchestrator | 2025-09-27 22:04:55.264539 | orchestrator | 2025-09-27 22:04:55.264552 | orchestrator | PLAY [Set 
kolla_action_rabbitmq] *********************************************** 2025-09-27 22:04:55.264566 | orchestrator | 2025-09-27 22:04:55.264580 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-27 22:04:55.264593 | orchestrator | Saturday 27 September 2025 22:02:44 +0000 (0:00:00.200) 0:00:00.200 **** 2025-09-27 22:04:55.264608 | orchestrator | ok: [localhost] => { 2025-09-27 22:04:55.264622 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-27 22:04:55.264636 | orchestrator | } 2025-09-27 22:04:55.264649 | orchestrator | 2025-09-27 22:04:55.264664 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-27 22:04:55.264677 | orchestrator | Saturday 27 September 2025 22:02:45 +0000 (0:00:00.060) 0:00:00.260 **** 2025-09-27 22:04:55.264691 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-27 22:04:55.264705 | orchestrator | ...ignoring 2025-09-27 22:04:55.264719 | orchestrator | 2025-09-27 22:04:55.264732 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-27 22:04:55.264746 | orchestrator | Saturday 27 September 2025 22:02:48 +0000 (0:00:03.318) 0:00:03.579 **** 2025-09-27 22:04:55.264759 | orchestrator | skipping: [localhost] 2025-09-27 22:04:55.264771 | orchestrator | 2025-09-27 22:04:55.264786 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-27 22:04:55.264800 | orchestrator | Saturday 27 September 2025 22:02:48 +0000 (0:00:00.039) 0:00:03.618 **** 2025-09-27 22:04:55.264812 | orchestrator | ok: [localhost] 2025-09-27 22:04:55.264824 | orchestrator | 2025-09-27 22:04:55.264836 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-09-27 22:04:55.264848 | orchestrator | 2025-09-27 22:04:55.264859 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:04:55.264871 | orchestrator | Saturday 27 September 2025 22:02:48 +0000 (0:00:00.134) 0:00:03.752 **** 2025-09-27 22:04:55.264883 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:04:55.264895 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:04:55.264907 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:04:55.264919 | orchestrator | 2025-09-27 22:04:55.264930 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:04:55.264942 | orchestrator | Saturday 27 September 2025 22:02:48 +0000 (0:00:00.317) 0:00:04.070 **** 2025-09-27 22:04:55.264954 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-27 22:04:55.264966 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-27 22:04:55.264978 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-27 22:04:55.264990 | orchestrator | 2025-09-27 22:04:55.265001 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-27 22:04:55.265013 | orchestrator | 2025-09-27 22:04:55.265024 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-27 22:04:55.265036 | orchestrator | Saturday 27 September 2025 22:02:49 +0000 (0:00:00.483) 0:00:04.553 **** 2025-09-27 22:04:55.265048 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:04:55.265064 | orchestrator | 2025-09-27 22:04:55.265082 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-27 22:04:55.265111 | orchestrator | Saturday 27 September 2025 22:02:49 +0000 (0:00:00.485) 0:00:05.039 **** 2025-09-27 
22:04:55.265133 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:04:55.265152 | orchestrator | 2025-09-27 22:04:55.265171 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-27 22:04:55.265202 | orchestrator | Saturday 27 September 2025 22:02:50 +0000 (0:00:00.941) 0:00:05.981 **** 2025-09-27 22:04:55.265221 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:04:55.265241 | orchestrator | 2025-09-27 22:04:55.265259 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-27 22:04:55.265278 | orchestrator | Saturday 27 September 2025 22:02:51 +0000 (0:00:00.461) 0:00:06.442 **** 2025-09-27 22:04:55.265297 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:04:55.265343 | orchestrator | 2025-09-27 22:04:55.265363 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-27 22:04:55.265381 | orchestrator | Saturday 27 September 2025 22:02:51 +0000 (0:00:00.633) 0:00:07.075 **** 2025-09-27 22:04:55.265401 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:04:55.265420 | orchestrator | 2025-09-27 22:04:55.265439 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-27 22:04:55.265454 | orchestrator | Saturday 27 September 2025 22:02:52 +0000 (0:00:00.522) 0:00:07.598 **** 2025-09-27 22:04:55.265465 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:04:55.265476 | orchestrator | 2025-09-27 22:04:55.265487 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-27 22:04:55.265497 | orchestrator | Saturday 27 September 2025 22:02:53 +0000 (0:00:01.531) 0:00:09.130 **** 2025-09-27 22:04:55.265508 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:04:55.265519 | orchestrator | 2025-09-27 22:04:55.265530 
| orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-27 22:04:55.265553 | orchestrator | Saturday 27 September 2025 22:02:55 +0000 (0:00:01.715) 0:00:10.845 **** 2025-09-27 22:04:55.265565 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:04:55.265575 | orchestrator | 2025-09-27 22:04:55.265586 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-27 22:04:55.265598 | orchestrator | Saturday 27 September 2025 22:02:56 +0000 (0:00:00.943) 0:00:11.788 **** 2025-09-27 22:04:55.265608 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:04:55.265619 | orchestrator | 2025-09-27 22:04:55.265630 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-27 22:04:55.265641 | orchestrator | Saturday 27 September 2025 22:02:57 +0000 (0:00:00.507) 0:00:12.296 **** 2025-09-27 22:04:55.265659 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:04:55.265671 | orchestrator | 2025-09-27 22:04:55.265681 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-27 22:04:55.265692 | orchestrator | Saturday 27 September 2025 22:02:57 +0000 (0:00:00.315) 0:00:12.611 **** 2025-09-27 22:04:55.265708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 22:04:55.265726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 22:04:55.265749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 22:04:55.265761 | orchestrator | 2025-09-27 22:04:55.265773 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-27 22:04:55.265784 | orchestrator | Saturday 27 September 2025 22:02:58 +0000 (0:00:00.944) 0:00:13.556 **** 2025-09-27 22:04:55.265809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 22:04:55.265824 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 22:04:55.265843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': 
'30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 22:04:55.265856 | orchestrator | 2025-09-27 22:04:55.265867 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-27 22:04:55.265878 | orchestrator | Saturday 27 September 2025 22:03:00 +0000 (0:00:02.328) 0:00:15.885 **** 2025-09-27 22:04:55.265889 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-27 22:04:55.265900 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-27 22:04:55.265911 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-27 22:04:55.265921 | orchestrator | 2025-09-27 22:04:55.265932 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-27 22:04:55.265943 | orchestrator | Saturday 27 September 2025 22:03:02 +0000 (0:00:01.693) 0:00:17.578 **** 2025-09-27 22:04:55.265954 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-27 22:04:55.265965 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-27 22:04:55.265976 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-27 22:04:55.265986 | orchestrator | 2025-09-27 22:04:55.265997 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-27 22:04:55.266060 | orchestrator | Saturday 27 September 2025 22:03:04 +0000 (0:00:02.233) 0:00:19.811 **** 2025-09-27 22:04:55.266075 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-27 22:04:55.266086 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-27 22:04:55.266097 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-27 22:04:55.266107 | orchestrator |
2025-09-27 22:04:55.266118 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-09-27 22:04:55.266129 | orchestrator | Saturday 27 September 2025 22:03:05 +0000 (0:00:01.283) 0:00:21.095 ****
2025-09-27 22:04:55.266140 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-27 22:04:55.266151 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-27 22:04:55.266161 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-27 22:04:55.266172 | orchestrator |
2025-09-27 22:04:55.266183 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-09-27 22:04:55.266194 | orchestrator | Saturday 27 September 2025 22:03:07 +0000 (0:00:01.771) 0:00:22.866 ****
2025-09-27 22:04:55.266281 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-27 22:04:55.266303 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-27 22:04:55.266381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-27 22:04:55.266395 | orchestrator |
2025-09-27 22:04:55.266407 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-09-27 22:04:55.266419 | orchestrator | Saturday 27 September 2025 22:03:09 +0000 (0:00:01.658) 0:00:24.524 ****
2025-09-27 22:04:55.266430 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-27 22:04:55.266441 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-27 22:04:55.266453 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-27 22:04:55.266464 | orchestrator |
2025-09-27 22:04:55.266475 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-27 22:04:55.266486 | orchestrator | Saturday 27 September 2025 22:03:10 +0000 (0:00:01.611) 0:00:26.136 ****
2025-09-27 22:04:55.266498 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:04:55.266509 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:04:55.266520 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:04:55.266531 | orchestrator |
2025-09-27 22:04:55.266542 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-09-27 22:04:55.266553 | orchestrator | Saturday 27 September 2025 22:03:11 +0000 (0:00:00.414) 0:00:26.551 ****
2025-09-27 22:04:55.266566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-27 22:04:55.266592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-27 22:04:55.266611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-27 22:04:55.266632 | orchestrator |
2025-09-27 22:04:55.266643 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-09-27 22:04:55.266654 | orchestrator | Saturday 27 September 2025 22:03:12 +0000 (0:00:01.491) 0:00:28.042 ****
2025-09-27 22:04:55.266665 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:04:55.266676 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:04:55.266687 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:04:55.266699 | orchestrator |
2025-09-27 22:04:55.266709 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-09-27 22:04:55.266720 | orchestrator | Saturday 27 September 2025 22:03:13 +0000 (0:00:00.857) 0:00:28.899 ****
2025-09-27 22:04:55.266732 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:04:55.266743 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:04:55.266753 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:04:55.266764 | orchestrator |
2025-09-27 22:04:55.266775 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-09-27 22:04:55.266786 | orchestrator | Saturday 27 September 2025 22:03:21 +0000 (0:00:07.843) 0:00:36.743 ****
2025-09-27 22:04:55.266796 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:04:55.266807 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:04:55.266818 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:04:55.266829 | orchestrator |
2025-09-27 22:04:55.266840 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-27 22:04:55.266851 | orchestrator |
2025-09-27 22:04:55.266862 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-27 22:04:55.266873 | orchestrator | Saturday 27 September 2025 22:03:21 +0000 (0:00:00.300) 0:00:37.043 ****
2025-09-27 22:04:55.266883 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:04:55.266895 | orchestrator |
2025-09-27 22:04:55.266906 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-27 22:04:55.266916 | orchestrator | Saturday 27 September 2025 22:03:22 +0000 (0:00:00.560) 0:00:37.604 ****
2025-09-27 22:04:55.266927 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:04:55.266938 | orchestrator |
2025-09-27 22:04:55.266949 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-27 22:04:55.266960 | orchestrator | Saturday 27 September 2025 22:03:22 +0000 (0:00:00.210) 0:00:37.815 ****
2025-09-27 22:04:55.266971 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:04:55.266981 | orchestrator |
2025-09-27 22:04:55.266992 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-27 22:04:55.267003 | orchestrator | Saturday 27 September 2025 22:03:29 +0000 (0:00:06.662) 0:00:44.477 ****
2025-09-27 22:04:55.267014 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:04:55.267025 | orchestrator |
2025-09-27 22:04:55.267036 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-27 22:04:55.267046 | orchestrator |
2025-09-27 22:04:55.267057 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-27 22:04:55.267068 | orchestrator | Saturday 27 September 2025 22:04:18 +0000 (0:00:49.688) 0:01:34.166 ****
2025-09-27 22:04:55.267079 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:04:55.267096 | orchestrator |
2025-09-27 22:04:55.267107 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-27 22:04:55.267118 | orchestrator | Saturday 27 September 2025 22:04:19 +0000 (0:00:00.612) 0:01:34.778 ****
2025-09-27 22:04:55.267128 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:04:55.267139 | orchestrator |
2025-09-27 22:04:55.267150 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-27 22:04:55.267161 | orchestrator | Saturday 27 September 2025 22:04:19 +0000 (0:00:00.280) 0:01:35.059 ****
2025-09-27 22:04:55.267171 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:04:55.267182 | orchestrator |
2025-09-27 22:04:55.267193 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-27 22:04:55.267205 | orchestrator | Saturday 27 September 2025 22:04:21 +0000 (0:00:01.637) 0:01:36.697 ****
2025-09-27 22:04:55.267215 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:04:55.267226 | orchestrator |
2025-09-27 22:04:55.267237 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-27 22:04:55.267247 | orchestrator |
2025-09-27 22:04:55.267258 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-27 22:04:55.267269 | orchestrator | Saturday 27 September 2025 22:04:35 +0000 (0:00:13.770) 0:01:50.468 ****
2025-09-27 22:04:55.267280 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:04:55.267290 | orchestrator |
2025-09-27 22:04:55.267323 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-27 22:04:55.267335 | orchestrator | Saturday 27 September 2025 22:04:35 +0000 (0:00:00.569) 0:01:51.038 ****
2025-09-27 22:04:55.267346 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:04:55.267357 | orchestrator |
2025-09-27 22:04:55.267368 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-27 22:04:55.267379 | orchestrator | Saturday 27 September 2025 22:04:36 +0000 (0:00:00.220) 0:01:51.258 ****
2025-09-27 22:04:55.267389 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:04:55.267400 | orchestrator |
2025-09-27 22:04:55.267416 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-27 22:04:55.267427 | orchestrator | Saturday 27 September 2025 22:04:37 +0000 (0:00:01.347) 0:01:52.606 ****
2025-09-27 22:04:55.267438 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:04:55.267448 | orchestrator |
2025-09-27 22:04:55.267459 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-09-27 22:04:55.267470 | orchestrator |
2025-09-27 22:04:55.267481 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-09-27 22:04:55.267491 | orchestrator | Saturday 27 September 2025 22:04:51 +0000 (0:00:14.071) 0:02:06.678 ****
2025-09-27 22:04:55.267502 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:04:55.267513 | orchestrator |
2025-09-27 22:04:55.267524 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-09-27 22:04:55.267534 | orchestrator | Saturday 27 September 2025 22:04:52 +0000 (0:00:00.585) 0:02:07.263 ****
2025-09-27 22:04:55.267545 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-27 22:04:55.267556 | orchestrator | enable_outward_rabbitmq_True
2025-09-27 22:04:55.267566 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-27 22:04:55.267577 | orchestrator | outward_rabbitmq_restart
2025-09-27 22:04:55.267588 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:04:55.267599 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:04:55.267609 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:04:55.267620 | orchestrator |
2025-09-27 22:04:55.267631 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-09-27 22:04:55.267642 | orchestrator | skipping: no hosts matched
2025-09-27 22:04:55.267652 | orchestrator |
2025-09-27 22:04:55.267663 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-09-27 22:04:55.267674 | orchestrator | skipping: no hosts matched
2025-09-27 22:04:55.267685 | orchestrator |
2025-09-27 22:04:55.267696 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-09-27 22:04:55.267713 | orchestrator | skipping: no hosts matched
2025-09-27 22:04:55.267724 | orchestrator |
2025-09-27 22:04:55.267734 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:04:55.267745 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-27 22:04:55.267757 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-27 22:04:55.267767 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 22:04:55.267778 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 22:04:55.267789 | orchestrator |
2025-09-27 22:04:55.267800 | orchestrator |
2025-09-27 22:04:55.267811 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:04:55.267821 | orchestrator | Saturday 27 September 2025 22:04:54 +0000 (0:00:02.210) 0:02:09.473 ****
2025-09-27 22:04:55.267832 | orchestrator | ===============================================================================
2025-09-27 22:04:55.267843 | orchestrator | rabbitmq : Waiting for rabbitmq to start
------------------------------- 77.53s
2025-09-27 22:04:55.267854 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.65s
2025-09-27 22:04:55.267864 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.84s
2025-09-27 22:04:55.267875 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.32s
2025-09-27 22:04:55.267886 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.33s
2025-09-27 22:04:55.267896 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.23s
2025-09-27 22:04:55.267907 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.21s
2025-09-27 22:04:55.267917 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.77s
2025-09-27 22:04:55.267928 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.74s
2025-09-27 22:04:55.267938 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.72s
2025-09-27 22:04:55.267949 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.69s
2025-09-27 22:04:55.267960 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.66s
2025-09-27 22:04:55.267971 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.61s
2025-09-27 22:04:55.267981 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.53s
2025-09-27 22:04:55.267992 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.49s
2025-09-27 22:04:55.268003 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.28s
2025-09-27 22:04:55.268013 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.94s
2025-09-27 22:04:55.268030 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.94s
2025-09-27 22:04:55.268206 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.94s
2025-09-27 22:04:55.268227 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.86s
2025-09-27 22:04:55.268239 | orchestrator | 2025-09-27 22:04:55 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:04:55.268262 | orchestrator | 2025-09-27 22:04:55 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:04:55.268275 | orchestrator | 2025-09-27 22:04:55 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:04:58.313835 | orchestrator | 2025-09-27 22:04:58 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:04:58.314361 | orchestrator | 2025-09-27 22:04:58 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:04:58.315572 | orchestrator | 2025-09-27 22:04:58 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:04:58.315621 | orchestrator | 2025-09-27 22:04:58 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:01.357081 | orchestrator | 2025-09-27 22:05:01 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:01.359107 | orchestrator | 2025-09-27 22:05:01 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:01.360850 | orchestrator | 2025-09-27 22:05:01 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:01.361305 | orchestrator | 2025-09-27 22:05:01 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:04.403939 | orchestrator | 2025-09-27 22:05:04 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:04.407070 |
orchestrator | 2025-09-27 22:05:04 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:04.408766 | orchestrator | 2025-09-27 22:05:04 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:04.409226 | orchestrator | 2025-09-27 22:05:04 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:07.450576 | orchestrator | 2025-09-27 22:05:07 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:07.451786 | orchestrator | 2025-09-27 22:05:07 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:07.452812 | orchestrator | 2025-09-27 22:05:07 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:07.453010 | orchestrator | 2025-09-27 22:05:07 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:10.507877 | orchestrator | 2025-09-27 22:05:10 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:10.508718 | orchestrator | 2025-09-27 22:05:10 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:10.510651 | orchestrator | 2025-09-27 22:05:10 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:10.510683 | orchestrator | 2025-09-27 22:05:10 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:13.557481 | orchestrator | 2025-09-27 22:05:13 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:13.557959 | orchestrator | 2025-09-27 22:05:13 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:13.559114 | orchestrator | 2025-09-27 22:05:13 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:13.559159 | orchestrator | 2025-09-27 22:05:13 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:16.597231 | orchestrator | 2025-09-27 22:05:16 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:16.597594 | orchestrator | 2025-09-27 22:05:16 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:16.598508 | orchestrator | 2025-09-27 22:05:16 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:16.598561 | orchestrator | 2025-09-27 22:05:16 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:19.641562 | orchestrator | 2025-09-27 22:05:19 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:19.642508 | orchestrator | 2025-09-27 22:05:19 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:19.642597 | orchestrator | 2025-09-27 22:05:19 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:19.642717 | orchestrator | 2025-09-27 22:05:19 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:22.677963 | orchestrator | 2025-09-27 22:05:22 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:22.679614 | orchestrator | 2025-09-27 22:05:22 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:22.681351 | orchestrator | 2025-09-27 22:05:22 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:22.681370 | orchestrator | 2025-09-27 22:05:22 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:25.719421 | orchestrator | 2025-09-27 22:05:25 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:25.720384 | orchestrator | 2025-09-27 22:05:25 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:25.722189 | orchestrator | 2025-09-27 22:05:25 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:25.722231 | orchestrator | 2025-09-27 22:05:25 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:28.768740 | orchestrator | 2025-09-27 22:05:28 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:28.769867 | orchestrator | 2025-09-27 22:05:28 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:28.772442 | orchestrator | 2025-09-27 22:05:28 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:28.772483 | orchestrator | 2025-09-27 22:05:28 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:31.806511 | orchestrator | 2025-09-27 22:05:31 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:31.809562 | orchestrator | 2025-09-27 22:05:31 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:31.811345 | orchestrator | 2025-09-27 22:05:31 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:31.811383 | orchestrator | 2025-09-27 22:05:31 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:34.860033 | orchestrator | 2025-09-27 22:05:34 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:34.860150 | orchestrator | 2025-09-27 22:05:34 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:34.861113 | orchestrator | 2025-09-27 22:05:34 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:34.861146 | orchestrator | 2025-09-27 22:05:34 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:37.906715 | orchestrator | 2025-09-27 22:05:37 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:37.907516 | orchestrator | 2025-09-27 22:05:37 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:37.908470 | orchestrator | 2025-09-27 22:05:37 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:37.908501 | orchestrator | 2025-09-27 22:05:37 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:40.953638 | orchestrator | 2025-09-27 22:05:40 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:40.955545 | orchestrator | 2025-09-27 22:05:40 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:40.957230 | orchestrator | 2025-09-27 22:05:40 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:40.957426 | orchestrator | 2025-09-27 22:05:40 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:43.999356 | orchestrator | 2025-09-27 22:05:43 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:44.002154 | orchestrator | 2025-09-27 22:05:43 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:44.004569 | orchestrator | 2025-09-27 22:05:44 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:44.004653 | orchestrator | 2025-09-27 22:05:44 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:47.054839 | orchestrator | 2025-09-27 22:05:47 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:47.056616 | orchestrator | 2025-09-27 22:05:47 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:47.058783 | orchestrator | 2025-09-27 22:05:47 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:47.059063 | orchestrator | 2025-09-27 22:05:47 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:50.115598 | orchestrator | 2025-09-27 22:05:50 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:50.116530 | orchestrator | 2025-09-27 22:05:50 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:50.117675 | orchestrator | 2025-09-27 22:05:50 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:50.117707 | orchestrator | 2025-09-27 22:05:50 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:53.157608 | orchestrator | 2025-09-27 22:05:53 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state STARTED
2025-09-27 22:05:53.157900 | orchestrator | 2025-09-27 22:05:53 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:05:53.158841 | orchestrator | 2025-09-27 22:05:53 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED
2025-09-27 22:05:53.158875 | orchestrator | 2025-09-27 22:05:53 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:05:56.184551 | orchestrator | 2025-09-27 22:05:56 | INFO  | Task a9f4ac66-03d5-4079-90ee-56b1d9b9f71d is in state SUCCESS
2025-09-27 22:05:56.185174 | orchestrator |
2025-09-27 22:05:56.185203 | orchestrator |
2025-09-27 22:05:56.185214 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:05:56.185224 | orchestrator |
2025-09-27 22:05:56.185234 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:05:56.185244 | orchestrator | Saturday 27 September 2025 22:03:28 +0000 (0:00:00.208) 0:00:00.208 ****
2025-09-27 22:05:56.185254 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:05:56.185265 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:05:56.185686 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:05:56.185705 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.185715 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.185725 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.185785 | orchestrator |
2025-09-27 22:05:56.185800 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:05:56.185810 | orchestrator | Saturday 27 September 2025 22:03:30 +0000 (0:00:01.478)
0:00:01.686 ****
2025-09-27 22:05:56.185820 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-09-27 22:05:56.185887 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-09-27 22:05:56.185900 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-09-27 22:05:56.185931 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-09-27 22:05:56.185941 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-09-27 22:05:56.185951 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-09-27 22:05:56.185960 | orchestrator |
2025-09-27 22:05:56.185970 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-09-27 22:05:56.185980 | orchestrator |
2025-09-27 22:05:56.185990 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-09-27 22:05:56.186000 | orchestrator | Saturday 27 September 2025 22:03:33 +0000 (0:00:02.929) 0:00:04.616 ****
2025-09-27 22:05:56.186010 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:05:56.186068 | orchestrator |
2025-09-27 22:05:56.186078 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-09-27 22:05:56.186088 | orchestrator | Saturday 27 September 2025 22:03:34 +0000 (0:00:01.675) 0:00:06.291 ****
2025-09-27 22:05:56.186100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186175 | orchestrator |
2025-09-27 22:05:56.186196 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-09-27 22:05:56.186206 | orchestrator | Saturday 27 September 2025 22:03:36 +0000 (0:00:01.219) 0:00:07.510 ****
2025-09-27 22:05:56.186216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186303 | orchestrator |
2025-09-27 22:05:56.186314 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-09-27 22:05:56.186324 | orchestrator | Saturday 27 September 2025 22:03:38 +0000 (0:00:02.214) 0:00:09.725 ****
2025-09-27 22:05:56.186334 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.186404 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186414 | orchestrator | 2025-09-27 22:05:56.186424 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-27 22:05:56.186433 | orchestrator | Saturday 27 September 2025 22:03:39 +0000 (0:00:01.284) 0:00:11.009 **** 2025-09-27 22:05:56.186443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186512 | orchestrator | 2025-09-27 22:05:56.186526 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-27 22:05:56.186538 | orchestrator | Saturday 27 September 2025 22:03:41 +0000 (0:00:01.513) 0:00:12.523 **** 2025-09-27 22:05:56.186550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186561 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:05:56.186617 | orchestrator | 2025-09-27 22:05:56.186628 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-27 22:05:56.186639 | orchestrator | Saturday 27 September 2025 22:03:42 +0000 (0:00:01.281) 0:00:13.804 **** 2025-09-27 22:05:56.186650 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:05:56.186661 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:05:56.186672 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:05:56.186682 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:05:56.186693 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:05:56.186703 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:05:56.186720 | orchestrator | 2025-09-27 22:05:56.186731 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-27 22:05:56.186742 | orchestrator | Saturday 27 September 2025 22:03:45 +0000 (0:00:02.999) 0:00:16.804 **** 2025-09-27 22:05:56.186753 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-27 22:05:56.186769 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-27 22:05:56.186780 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-27 
22:05:56.186792 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-27 22:05:56.186802 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-27 22:05:56.186813 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-27 22:05:56.186824 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 22:05:56.186836 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 22:05:56.186852 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 22:05:56.186863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 22:05:56.186875 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 22:05:56.186885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 22:05:56.186896 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 22:05:56.186907 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 22:05:56.186916 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 22:05:56.186926 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 22:05:56.186936 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 22:05:56.186945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 22:05:56.186955 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 22:05:56.186965 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 22:05:56.186974 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 22:05:56.186984 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 22:05:56.186994 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 22:05:56.187003 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 22:05:56.187013 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 22:05:56.187023 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 22:05:56.187032 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 22:05:56.187042 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 22:05:56.187056 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 22:05:56.187066 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 22:05:56.187075 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 
'value': False}) 2025-09-27 22:05:56.187085 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 22:05:56.187094 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 22:05:56.187104 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 22:05:56.187113 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 22:05:56.187123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 22:05:56.187133 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-27 22:05:56.187143 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-27 22:05:56.187152 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-27 22:05:56.187166 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-27 22:05:56.187176 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-27 22:05:56.187185 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-27 22:05:56.187195 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-27 22:05:56.187205 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-27 22:05:56.187220 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-27 22:05:56.187230 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-27 22:05:56.187239 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-27 22:05:56.187249 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-27 22:05:56.187259 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-27 22:05:56.187268 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-27 22:05:56.187278 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-27 22:05:56.187303 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-27 22:05:56.187313 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-27 22:05:56.187322 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-27 22:05:56.187332 | orchestrator | 2025-09-27 22:05:56.187341 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 22:05:56.187351 | orchestrator | Saturday 27 September 2025 22:04:04 +0000 (0:00:19.152) 0:00:35.956 **** 2025-09-27 22:05:56.187367 | orchestrator | 2025-09-27 22:05:56.187376 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2025-09-27 22:05:56.187386 | orchestrator | Saturday 27 September 2025 22:04:04 +0000 (0:00:00.235) 0:00:36.191 **** 2025-09-27 22:05:56.187396 | orchestrator | 2025-09-27 22:05:56.187405 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 22:05:56.187415 | orchestrator | Saturday 27 September 2025 22:04:04 +0000 (0:00:00.064) 0:00:36.256 **** 2025-09-27 22:05:56.187424 | orchestrator | 2025-09-27 22:05:56.187433 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 22:05:56.187443 | orchestrator | Saturday 27 September 2025 22:04:04 +0000 (0:00:00.065) 0:00:36.321 **** 2025-09-27 22:05:56.187452 | orchestrator | 2025-09-27 22:05:56.187462 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 22:05:56.187471 | orchestrator | Saturday 27 September 2025 22:04:04 +0000 (0:00:00.130) 0:00:36.452 **** 2025-09-27 22:05:56.187481 | orchestrator | 2025-09-27 22:05:56.187490 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 22:05:56.187500 | orchestrator | Saturday 27 September 2025 22:04:05 +0000 (0:00:00.081) 0:00:36.533 **** 2025-09-27 22:05:56.187509 | orchestrator | 2025-09-27 22:05:56.187519 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-27 22:05:56.187528 | orchestrator | Saturday 27 September 2025 22:04:05 +0000 (0:00:00.062) 0:00:36.595 **** 2025-09-27 22:05:56.187538 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:05:56.187548 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:05:56.187557 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:05:56.187566 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:05:56.187576 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:05:56.187585 | orchestrator | ok: [testbed-node-0] 2025-09-27 
22:05:56.187595 | orchestrator | 2025-09-27 22:05:56.187605 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-27 22:05:56.187614 | orchestrator | Saturday 27 September 2025 22:04:06 +0000 (0:00:01.720) 0:00:38.316 **** 2025-09-27 22:05:56.187624 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:05:56.187634 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:05:56.187643 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:05:56.187653 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:05:56.187662 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:05:56.187671 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:05:56.187681 | orchestrator | 2025-09-27 22:05:56.187691 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-27 22:05:56.187700 | orchestrator | 2025-09-27 22:05:56.187710 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-27 22:05:56.187719 | orchestrator | Saturday 27 September 2025 22:04:36 +0000 (0:00:29.712) 0:01:08.029 **** 2025-09-27 22:05:56.187729 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:05:56.187738 | orchestrator | 2025-09-27 22:05:56.187748 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-27 22:05:56.187775 | orchestrator | Saturday 27 September 2025 22:04:37 +0000 (0:00:00.588) 0:01:08.617 **** 2025-09-27 22:05:56.187785 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:05:56.187805 | orchestrator | 2025-09-27 22:05:56.187814 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-27 22:05:56.187824 | orchestrator | Saturday 27 September 2025 22:04:37 +0000 
(0:00:00.673) 0:01:09.290 **** 2025-09-27 22:05:56.187834 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:05:56.187843 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:05:56.187853 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:05:56.187862 | orchestrator | 2025-09-27 22:05:56.187872 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-27 22:05:56.187888 | orchestrator | Saturday 27 September 2025 22:04:38 +0000 (0:00:01.049) 0:01:10.340 **** 2025-09-27 22:05:56.187897 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:05:56.187907 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:05:56.187921 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:05:56.187931 | orchestrator | 2025-09-27 22:05:56.187941 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-27 22:05:56.187950 | orchestrator | Saturday 27 September 2025 22:04:39 +0000 (0:00:00.473) 0:01:10.813 **** 2025-09-27 22:05:56.187960 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:05:56.187969 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:05:56.187979 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:05:56.187988 | orchestrator | 2025-09-27 22:05:56.187998 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-27 22:05:56.188008 | orchestrator | Saturday 27 September 2025 22:04:39 +0000 (0:00:00.322) 0:01:11.136 **** 2025-09-27 22:05:56.188017 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:05:56.188027 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:05:56.188036 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:05:56.188046 | orchestrator | 2025-09-27 22:05:56.188056 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-27 22:05:56.188065 | orchestrator | Saturday 27 September 2025 22:04:39 +0000 (0:00:00.288) 0:01:11.424 **** 2025-09-27 22:05:56.188075 | orchestrator | 
ok: [testbed-node-0] 2025-09-27 22:05:56.188084 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:05:56.188094 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:05:56.188103 | orchestrator | 2025-09-27 22:05:56.188113 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-27 22:05:56.188122 | orchestrator | Saturday 27 September 2025 22:04:40 +0000 (0:00:00.433) 0:01:11.858 **** 2025-09-27 22:05:56.188132 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.188142 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188163 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188173 | orchestrator | 2025-09-27 22:05:56.188183 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-27 22:05:56.188192 | orchestrator | Saturday 27 September 2025 22:04:40 +0000 (0:00:00.281) 0:01:12.139 **** 2025-09-27 22:05:56.188202 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.188212 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188221 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188231 | orchestrator | 2025-09-27 22:05:56.188240 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-27 22:05:56.188250 | orchestrator | Saturday 27 September 2025 22:04:40 +0000 (0:00:00.249) 0:01:12.389 **** 2025-09-27 22:05:56.188259 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.188269 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188278 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188332 | orchestrator | 2025-09-27 22:05:56.188342 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-27 22:05:56.188352 | orchestrator | Saturday 27 September 2025 22:04:41 +0000 (0:00:00.308) 0:01:12.697 **** 2025-09-27 22:05:56.188362 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 22:05:56.188371 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188381 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188390 | orchestrator | 2025-09-27 22:05:56.188400 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-27 22:05:56.188409 | orchestrator | Saturday 27 September 2025 22:04:41 +0000 (0:00:00.410) 0:01:13.107 **** 2025-09-27 22:05:56.188419 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.188429 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188438 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188448 | orchestrator | 2025-09-27 22:05:56.188457 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-27 22:05:56.188467 | orchestrator | Saturday 27 September 2025 22:04:41 +0000 (0:00:00.290) 0:01:13.398 **** 2025-09-27 22:05:56.188488 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.188498 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188507 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188517 | orchestrator | 2025-09-27 22:05:56.188527 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-27 22:05:56.188536 | orchestrator | Saturday 27 September 2025 22:04:42 +0000 (0:00:00.277) 0:01:13.676 **** 2025-09-27 22:05:56.188546 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.188556 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188565 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188575 | orchestrator | 2025-09-27 22:05:56.188585 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-27 22:05:56.188594 | orchestrator | Saturday 27 September 2025 22:04:42 +0000 (0:00:00.334) 0:01:14.010 **** 2025-09-27 22:05:56.188604 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 22:05:56.188613 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188623 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188632 | orchestrator | 2025-09-27 22:05:56.188642 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-27 22:05:56.188651 | orchestrator | Saturday 27 September 2025 22:04:42 +0000 (0:00:00.312) 0:01:14.323 **** 2025-09-27 22:05:56.188661 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.188671 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188680 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188690 | orchestrator | 2025-09-27 22:05:56.188699 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-27 22:05:56.188714 | orchestrator | Saturday 27 September 2025 22:04:43 +0000 (0:00:00.518) 0:01:14.841 **** 2025-09-27 22:05:56.188724 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.188733 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188743 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188752 | orchestrator | 2025-09-27 22:05:56.188762 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-27 22:05:56.188772 | orchestrator | Saturday 27 September 2025 22:04:43 +0000 (0:00:00.297) 0:01:15.139 **** 2025-09-27 22:05:56.188781 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.188791 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188800 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188810 | orchestrator | 2025-09-27 22:05:56.188819 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-27 22:05:56.188829 | orchestrator | Saturday 27 September 2025 22:04:43 +0000 (0:00:00.277) 0:01:15.416 **** 2025-09-27 22:05:56.188838 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 22:05:56.188848 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:05:56.188863 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:05:56.188873 | orchestrator | 2025-09-27 22:05:56.188882 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-27 22:05:56.188892 | orchestrator | Saturday 27 September 2025 22:04:44 +0000 (0:00:00.273) 0:01:15.690 **** 2025-09-27 22:05:56.188902 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:05:56.188911 | orchestrator | 2025-09-27 22:05:56.188921 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-27 22:05:56.188930 | orchestrator | Saturday 27 September 2025 22:04:44 +0000 (0:00:00.744) 0:01:16.434 **** 2025-09-27 22:05:56.188940 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:05:56.188950 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:05:56.188959 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:05:56.188969 | orchestrator | 2025-09-27 22:05:56.188979 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-27 22:05:56.188988 | orchestrator | Saturday 27 September 2025 22:04:45 +0000 (0:00:00.442) 0:01:16.877 **** 2025-09-27 22:05:56.188998 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:05:56.189008 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:05:56.189023 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:05:56.189032 | orchestrator | 2025-09-27 22:05:56.189042 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-27 22:05:56.189052 | orchestrator | Saturday 27 September 2025 22:04:45 +0000 (0:00:00.424) 0:01:17.302 **** 2025-09-27 22:05:56.189061 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:05:56.189071 | orchestrator | skipping: [testbed-node-1] 
2025-09-27 22:05:56.189080 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.189090 | orchestrator |
2025-09-27 22:05:56.189100 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-09-27 22:05:56.189109 | orchestrator | Saturday 27 September 2025 22:04:46 +0000 (0:00:00.574) 0:01:17.876 ****
2025-09-27 22:05:56.189119 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:05:56.189128 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:05:56.189138 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.189147 | orchestrator |
2025-09-27 22:05:56.189157 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-09-27 22:05:56.189167 | orchestrator | Saturday 27 September 2025 22:04:46 +0000 (0:00:00.365) 0:01:18.242 ****
2025-09-27 22:05:56.189176 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:05:56.189185 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:05:56.189195 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.189204 | orchestrator |
2025-09-27 22:05:56.189214 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-09-27 22:05:56.189223 | orchestrator | Saturday 27 September 2025 22:04:47 +0000 (0:00:00.351) 0:01:18.593 ****
2025-09-27 22:05:56.189233 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:05:56.189242 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:05:56.189252 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.189261 | orchestrator |
2025-09-27 22:05:56.189271 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-09-27 22:05:56.189293 | orchestrator | Saturday 27 September 2025 22:04:47 +0000 (0:00:00.446) 0:01:19.040 ****
2025-09-27 22:05:56.189303 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:05:56.189312 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:05:56.189322 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.189332 | orchestrator |
2025-09-27 22:05:56.189342 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-27 22:05:56.189351 | orchestrator | Saturday 27 September 2025 22:04:48 +0000 (0:00:00.595) 0:01:19.636 ****
2025-09-27 22:05:56.189361 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:05:56.189371 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:05:56.189380 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.189390 | orchestrator |
2025-09-27 22:05:56.189400 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-27 22:05:56.189409 | orchestrator | Saturday 27 September 2025 22:04:48 +0000 (0:00:00.403) 0:01:20.039 ****
2025-09-27 22:05:56.189420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189540 | orchestrator |
2025-09-27 22:05:56.189550 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-27 22:05:56.189559 | orchestrator | Saturday 27 September 2025 22:04:49 +0000 (0:00:01.356) 0:01:21.395 ****
2025-09-27 22:05:56.189569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189699 | orchestrator |
2025-09-27 22:05:56.189709 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-27 22:05:56.189719 | orchestrator | Saturday 27 September 2025 22:04:54 +0000 (0:00:04.218) 0:01:25.614 ****
2025-09-27 22:05:56.189729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.189833 | orchestrator |
2025-09-27 22:05:56.189842 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-27 22:05:56.189852 | orchestrator | Saturday 27 September 2025 22:04:56 +0000 (0:00:01.910) 0:01:27.525 ****
2025-09-27 22:05:56.189862 | orchestrator |
2025-09-27 22:05:56.189871 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-27 22:05:56.189881 | orchestrator | Saturday 27 September 2025 22:04:56 +0000 (0:00:00.271) 0:01:27.796 ****
2025-09-27 22:05:56.189890 | orchestrator |
2025-09-27 22:05:56.189900 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-27 22:05:56.189909 | orchestrator | Saturday 27 September 2025 22:04:56 +0000 (0:00:00.082) 0:01:27.879 ****
2025-09-27 22:05:56.189919 | orchestrator |
2025-09-27 22:05:56.189928 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-27 22:05:56.189938 | orchestrator | Saturday 27 September 2025 22:04:56 +0000 (0:00:00.069) 0:01:27.948 ****
2025-09-27 22:05:56.189948 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:05:56.189957 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:05:56.189967 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:05:56.189976 | orchestrator |
2025-09-27 22:05:56.189986 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-27 22:05:56.189995 | orchestrator | Saturday 27 September 2025 22:05:02 +0000 (0:00:06.483) 0:01:34.431 ****
2025-09-27 22:05:56.190009 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:05:56.190072 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:05:56.190096 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:05:56.190118 | orchestrator |
2025-09-27 22:05:56.190135 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-27 22:05:56.190151 | orchestrator | Saturday 27 September 2025 22:05:09 +0000 (0:00:06.662) 0:01:41.094 ****
2025-09-27 22:05:56.190168 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:05:56.190186 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:05:56.190203 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:05:56.190213 | orchestrator |
2025-09-27 22:05:56.190223 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-27 22:05:56.190233 | orchestrator | Saturday 27 September 2025 22:05:16 +0000 (0:00:06.933) 0:01:48.027 ****
2025-09-27 22:05:56.190242 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:05:56.190252 | orchestrator |
2025-09-27 22:05:56.190261 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-27 22:05:56.190271 | orchestrator | Saturday 27 September 2025 22:05:16 +0000 (0:00:00.119) 0:01:48.147 ****
2025-09-27 22:05:56.190280 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.190336 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.190345 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.190355 | orchestrator |
2025-09-27 22:05:56.190365 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-27 22:05:56.190374 | orchestrator | Saturday 27 September 2025 22:05:17 +0000 (0:00:01.005) 0:01:49.153 ****
2025-09-27 22:05:56.190384 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:05:56.190394 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.190403 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:05:56.190413 | orchestrator |
2025-09-27 22:05:56.190428 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-27 22:05:56.190438 | orchestrator | Saturday 27 September 2025 22:05:18 +0000 (0:00:00.651) 0:01:49.804 ****
2025-09-27 22:05:56.190448 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.190458 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.190467 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.190477 | orchestrator |
2025-09-27 22:05:56.190486 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-27 22:05:56.190496 | orchestrator | Saturday 27 September 2025 22:05:18 +0000 (0:00:00.678) 0:01:50.483 ****
2025-09-27 22:05:56.190506 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:05:56.190515 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.190525 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:05:56.190535 | orchestrator |
2025-09-27 22:05:56.190544 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-27 22:05:56.190554 | orchestrator | Saturday 27 September 2025 22:05:19 +0000 (0:00:00.640) 0:01:51.123 ****
2025-09-27 22:05:56.190564 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.190582 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.190592 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.190602 | orchestrator |
2025-09-27 22:05:56.190611 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-27 22:05:56.190621 | orchestrator | Saturday 27 September 2025 22:05:20 +0000 (0:00:01.197) 0:01:52.321 ****
2025-09-27 22:05:56.190630 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.190640 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.190649 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.190659 | orchestrator |
2025-09-27 22:05:56.190668 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-09-27 22:05:56.190678 | orchestrator | Saturday 27 September 2025 22:05:21 +0000 (0:00:00.690) 0:01:53.012 ****
2025-09-27 22:05:56.190688 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.190697 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.190707 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.190716 | orchestrator |
2025-09-27 22:05:56.190726 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-27 22:05:56.190755 | orchestrator | Saturday 27 September 2025 22:05:21 +0000 (0:00:00.316) 0:01:53.328 ****
2025-09-27 22:05:56.190766 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190777 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190787 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190797 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190807 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190818 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190832 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190842 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190858 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190868 | orchestrator |
2025-09-27 22:05:56.190878 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-27 22:05:56.190888 | orchestrator | Saturday 27 September 2025 22:05:23 +0000 (0:00:01.313) 0:01:54.641 ****
2025-09-27 22:05:56.190902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190911 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190919 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190927 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190952 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.190980 | orchestrator |
2025-09-27 22:05:56.190989 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-27 22:05:56.190997 | orchestrator | Saturday 27 September 2025 22:05:28 +0000 (0:00:04.995) 0:01:59.637 ****
2025-09-27 22:05:56.191017 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.191026 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.191034 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.191042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.191050 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.191059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.191067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.191075 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.191083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:05:56.191091 | orchestrator |
2025-09-27 22:05:56.191103 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-27 22:05:56.191111 | orchestrator | Saturday 27 September 2025 22:05:30 +0000 (0:00:02.514) 0:02:02.151 ****
2025-09-27 22:05:56.191119 | orchestrator |
2025-09-27 22:05:56.191127 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-27 22:05:56.191140 | orchestrator | Saturday 27 September 2025 22:05:30 +0000 (0:00:00.065) 0:02:02.217 ****
2025-09-27 22:05:56.191148 | orchestrator |
2025-09-27 22:05:56.191156 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-27 22:05:56.191164 | orchestrator | Saturday 27 September 2025 22:05:30 +0000 (0:00:00.065) 0:02:02.282 ****
2025-09-27 22:05:56.191172 | orchestrator |
2025-09-27 22:05:56.191180 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-27 22:05:56.191188 | orchestrator | Saturday 27 September 2025 22:05:30 +0000 (0:00:00.070) 0:02:02.353 ****
2025-09-27 22:05:56.191195 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:05:56.191203 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:05:56.191211 | orchestrator |
2025-09-27 22:05:56.191223 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-27 22:05:56.191231 | orchestrator | Saturday 27 September 2025 22:05:37 +0000 (0:00:06.246) 0:02:08.599 ****
2025-09-27 22:05:56.191239 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:05:56.191247 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:05:56.191255 | orchestrator |
2025-09-27 22:05:56.191263 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-27 22:05:56.191270 | orchestrator | Saturday 27 September 2025 22:05:43 +0000 (0:00:06.176) 0:02:14.775 ****
2025-09-27 22:05:56.191278 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:05:56.191298 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:05:56.191306 | orchestrator |
2025-09-27 22:05:56.191314 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-27 22:05:56.191322 | orchestrator | Saturday 27 September 2025 22:05:50 +0000 (0:00:07.169) 0:02:21.945 ****
2025-09-27 22:05:56.191329 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:05:56.191337 | orchestrator |
2025-09-27 22:05:56.191345 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-27 22:05:56.191353 | orchestrator | Saturday 27 September 2025 22:05:50 +0000 (0:00:00.209) 0:02:22.155 ****
2025-09-27 22:05:56.191369 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.191377 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.191385 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.191393 | orchestrator |
2025-09-27 22:05:56.191401 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-27 22:05:56.191408 | orchestrator | Saturday 27 September 2025 22:05:51 +0000 (0:00:00.898) 0:02:23.053 ****
2025-09-27 22:05:56.191416 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:05:56.191424 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.191432 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:05:56.191440 | orchestrator |
2025-09-27 22:05:56.191447 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-27 22:05:56.191455 | orchestrator | Saturday 27 September 2025 22:05:52 +0000 (0:00:00.718) 0:02:23.772 ****
2025-09-27 22:05:56.191463 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.191471 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.191479 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.191486 | orchestrator |
2025-09-27 22:05:56.191494 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-27 22:05:56.191502 | orchestrator | Saturday 27 September 2025 22:05:53 +0000 (0:00:00.754) 0:02:24.526 ****
2025-09-27 22:05:56.191510 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:05:56.191518 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:05:56.191525 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:05:56.191533 | orchestrator |
2025-09-27 22:05:56.191541 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-27 22:05:56.191548 | orchestrator | Saturday 27 September 2025 22:05:53 +0000 (0:00:00.677) 0:02:25.204 ****
2025-09-27 22:05:56.191556 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.191564 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.191572 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.191585 | orchestrator |
2025-09-27 22:05:56.191593 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-27 22:05:56.191600 | orchestrator | Saturday 27 September 2025 22:05:54 +0000 (0:00:00.733) 0:02:25.938 ****
2025-09-27 22:05:56.191608 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:05:56.191616 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:05:56.191623 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:05:56.191631 | orchestrator |
2025-09-27 22:05:56.191639 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:05:56.191647 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-27 22:05:56.191655 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-27 22:05:56.191663 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-27 22:05:56.191671 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:05:56.191679 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:05:56.191687 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:05:56.191695 | orchestrator |
2025-09-27 22:05:56.191703 | orchestrator |
2025-09-27 22:05:56.191711 | orchestrator | TASKS RECAP
******************************************************************** 2025-09-27 22:05:56.191722 | orchestrator | Saturday 27 September 2025 22:05:55 +0000 (0:00:01.235) 0:02:27.173 **** 2025-09-27 22:05:56.191730 | orchestrator | =============================================================================== 2025-09-27 22:05:56.191738 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.71s 2025-09-27 22:05:56.191745 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.15s 2025-09-27 22:05:56.191753 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.10s 2025-09-27 22:05:56.191761 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.84s 2025-09-27 22:05:56.191769 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 12.73s 2025-09-27 22:05:56.191777 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.00s 2025-09-27 22:05:56.191784 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.22s 2025-09-27 22:05:56.191796 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.00s 2025-09-27 22:05:56.191804 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.93s 2025-09-27 22:05:56.191812 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.51s 2025-09-27 22:05:56.191820 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.21s 2025-09-27 22:05:56.191828 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.91s 2025-09-27 22:05:56.191836 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.72s 2025-09-27 22:05:56.191843 | orchestrator | ovn-controller : include_tasks 
------------------------------------------ 1.68s 2025-09-27 22:05:56.191851 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.51s 2025-09-27 22:05:56.191859 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.48s 2025-09-27 22:05:56.191867 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s 2025-09-27 22:05:56.191874 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.31s 2025-09-27 22:05:56.191882 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.28s 2025-09-27 22:05:56.191895 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.28s 2025-09-27 22:05:56.191903 | orchestrator | 2025-09-27 22:05:56 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:05:56.191911 | orchestrator | 2025-09-27 22:05:56 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:05:56.191919 | orchestrator | 2025-09-27 22:05:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:05:59.215983 | orchestrator | 2025-09-27 22:05:59 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:05:59.217356 | orchestrator | 2025-09-27 22:05:59 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:05:59.217379 | orchestrator | 2025-09-27 22:05:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:06:02.257943 | orchestrator | 2025-09-27 22:06:02 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:06:02.258008 | orchestrator | 2025-09-27 22:06:02 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:06:02.258064 | orchestrator | 2025-09-27 22:06:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:06:05.289761 | orchestrator | 2025-09-27 
22:06:05 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:06:05.292641 | orchestrator | 2025-09-27 22:06:05 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state STARTED 2025-09-27 22:06:05.292687 | orchestrator | 2025-09-27 22:06:05 | INFO  | Wait 1 second(s) until the next check [identical STARTED/wait polling entries for tasks 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d and 04f22026-8acb-4ad6-9ac5-96cb688da1a3, repeated every ~3 s from 22:06:08 through 22:08:37, trimmed] 2025-09-27 22:08:40.644732 | orchestrator | 2025-09-27 22:08:40 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:08:40.646285 | orchestrator | 2025-09-27 22:08:40 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:08:40.648268 | orchestrator | 2025-09-27 22:08:40 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:08:40.660949 |
orchestrator | 2025-09-27 22:08:40 | INFO  | Task 04f22026-8acb-4ad6-9ac5-96cb688da1a3 is in state SUCCESS 2025-09-27 22:08:40.663651 | orchestrator | 2025-09-27 22:08:40.663733 | orchestrator | 2025-09-27 22:08:40.663747 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:08:40.663759 | orchestrator | 2025-09-27 22:08:40.663787 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:08:40.663800 | orchestrator | Saturday 27 September 2025 22:02:24 +0000 (0:00:00.760) 0:00:00.760 **** 2025-09-27 22:08:40.663810 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.663821 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.663831 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.663841 | orchestrator | 2025-09-27 22:08:40.663850 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:08:40.663860 | orchestrator | Saturday 27 September 2025 22:02:25 +0000 (0:00:00.653) 0:00:01.413 **** 2025-09-27 22:08:40.663870 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-27 22:08:40.663880 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-27 22:08:40.663911 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-27 22:08:40.663921 | orchestrator | 2025-09-27 22:08:40.663931 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-27 22:08:40.663940 | orchestrator | 2025-09-27 22:08:40.663950 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-27 22:08:40.663959 | orchestrator | Saturday 27 September 2025 22:02:26 +0000 (0:00:00.834) 0:00:02.247 **** 2025-09-27 22:08:40.663969 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2025-09-27 22:08:40.663979 | orchestrator | 2025-09-27 22:08:40.663989 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-27 22:08:40.663998 | orchestrator | Saturday 27 September 2025 22:02:27 +0000 (0:00:01.235) 0:00:03.482 **** 2025-09-27 22:08:40.664008 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.664018 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.664027 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.664037 | orchestrator | 2025-09-27 22:08:40.664047 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-27 22:08:40.664056 | orchestrator | Saturday 27 September 2025 22:02:28 +0000 (0:00:00.732) 0:00:04.215 **** 2025-09-27 22:08:40.664066 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.664075 | orchestrator | 2025-09-27 22:08:40.664084 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-27 22:08:40.664094 | orchestrator | Saturday 27 September 2025 22:02:29 +0000 (0:00:00.965) 0:00:05.180 **** 2025-09-27 22:08:40.664222 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.664237 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.664249 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.664260 | orchestrator | 2025-09-27 22:08:40.664271 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-27 22:08:40.664282 | orchestrator | Saturday 27 September 2025 22:02:30 +0000 (0:00:00.938) 0:00:06.118 **** 2025-09-27 22:08:40.664293 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-27 22:08:40.664304 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-27 22:08:40.664315 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-27 22:08:40.664325 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-27 22:08:40.664336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-27 22:08:40.664347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-27 22:08:40.664358 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-27 22:08:40.664370 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-27 22:08:40.664381 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-27 22:08:40.664392 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-27 22:08:40.664403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-27 22:08:40.664413 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-27 22:08:40.664422 | orchestrator | 2025-09-27 22:08:40.664432 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-27 22:08:40.664442 | orchestrator | Saturday 27 September 2025 22:02:32 +0000 (0:00:02.406) 0:00:08.525 **** 2025-09-27 22:08:40.664451 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-27 22:08:40.664462 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-27 22:08:40.664550 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-27 22:08:40.664562 | orchestrator | 2025-09-27 22:08:40.664572 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-27 22:08:40.664582 | orchestrator | Saturday 27 September 
2025 22:02:33 +0000 (0:00:00.772) 0:00:09.297 **** 2025-09-27 22:08:40.664591 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-27 22:08:40.664601 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-27 22:08:40.664611 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-27 22:08:40.664620 | orchestrator | 2025-09-27 22:08:40.664630 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-27 22:08:40.664639 | orchestrator | Saturday 27 September 2025 22:02:34 +0000 (0:00:01.248) 0:00:10.546 **** 2025-09-27 22:08:40.664649 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-27 22:08:40.664659 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.664685 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-27 22:08:40.664696 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.664706 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-27 22:08:40.664722 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.664732 | orchestrator | 2025-09-27 22:08:40.664742 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-27 22:08:40.664751 | orchestrator | Saturday 27 September 2025 22:02:35 +0000 (0:00:00.590) 0:00:11.136 **** 2025-09-27 22:08:40.664765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
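The module-load role above does two things per module: load `ip_vs` immediately, then persist it under `/etc/modules-load.d/` so it is reloaded at boot (the "Drop module persistence" task is skipped because persistence is wanted here). A minimal sketch of those two steps, assuming standard `modprobe` and `copy` modules rather than the role's actual implementation:

```yaml
# Hypothetical sketch of the module-load steps logged above.
- name: Load modules
  community.general.modprobe:
    name: "{{ item }}"
    state: present
  loop: [ip_vs]          # IPVS is required by keepalived for virtual-server handling

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    content: "{{ item }}\n"
    dest: "/etc/modules-load.d/{{ item }}.conf"
    mode: "0644"
  loop: [ip_vs]
```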
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.664780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.664791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.664801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.664818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.664834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.664850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.664861 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.664871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.664881 | orchestrator | 2025-09-27 22:08:40.664891 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-27 22:08:40.664901 | orchestrator | Saturday 27 September 2025 22:02:37 +0000 (0:00:01.877) 0:00:13.014 **** 2025-09-27 22:08:40.664911 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.664920 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.664930 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.664940 | orchestrator | 2025-09-27 22:08:40.664950 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-27 22:08:40.664959 | orchestrator | Saturday 27 September 2025 22:02:38 +0000 (0:00:01.332) 0:00:14.346 **** 2025-09-27 22:08:40.664969 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-27 22:08:40.664979 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-27 
22:08:40.664989 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-27 22:08:40.664998 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-27 22:08:40.665008 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-27 22:08:40.665023 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-27 22:08:40.665033 | orchestrator | 2025-09-27 22:08:40.665043 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-27 22:08:40.665096 | orchestrator | Saturday 27 September 2025 22:02:41 +0000 (0:00:02.593) 0:00:16.940 **** 2025-09-27 22:08:40.665106 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.665116 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.665125 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.665135 | orchestrator | 2025-09-27 22:08:40.665145 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-27 22:08:40.665154 | orchestrator | Saturday 27 September 2025 22:02:41 +0000 (0:00:00.956) 0:00:17.896 **** 2025-09-27 22:08:40.665164 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.665174 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.665183 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.665223 | orchestrator | 2025-09-27 22:08:40.665233 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-27 22:08:40.665243 | orchestrator | Saturday 27 September 2025 22:02:44 +0000 (0:00:02.113) 0:00:20.010 **** 2025-09-27 22:08:40.665253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.665278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.665290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.665302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3', 
'__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 22:08:40.665312 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.665323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.665341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.665351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.665362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 22:08:40.665372 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.665394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.665405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.665520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.665540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 22:08:40.665550 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.665560 | orchestrator | 2025-09-27 22:08:40.665570 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-27 22:08:40.665580 | orchestrator | Saturday 27 September 2025 22:02:45 +0000 (0:00:01.203) 0:00:21.213 **** 2025-09-27 22:08:40.665590 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665629 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.665694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.665706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 22:08:40.665716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 22:08:40.665738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.665776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3', '__omit_place_holder__92ccd44a8316d834cda1c0aa4aee55052b0118a3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 22:08:40.665787 | orchestrator | 2025-09-27 22:08:40.665796 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-27 22:08:40.665806 | orchestrator | 
Saturday 27 September 2025 22:02:48 +0000 (0:00:02.915) 0:00:24.129 **** 2025-09-27 22:08:40.665861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.665946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.665956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.665966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.665976 | orchestrator | 2025-09-27 22:08:40.665986 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-27 22:08:40.665996 | orchestrator | Saturday 27 September 2025 22:02:51 +0000 (0:00:03.052) 0:00:27.181 **** 2025-09-27 
22:08:40.666006 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-27 22:08:40.666087 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-27 22:08:40.666101 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-27 22:08:40.666111 | orchestrator | 2025-09-27 22:08:40.666121 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-27 22:08:40.666131 | orchestrator | Saturday 27 September 2025 22:02:54 +0000 (0:00:03.356) 0:00:30.538 **** 2025-09-27 22:08:40.666140 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-27 22:08:40.666150 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-27 22:08:40.666160 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-27 22:08:40.666170 | orchestrator | 2025-09-27 22:08:40.667314 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-27 22:08:40.667402 | orchestrator | Saturday 27 September 2025 22:02:58 +0000 (0:00:03.452) 0:00:33.991 **** 2025-09-27 22:08:40.667445 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.667459 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.667469 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.667480 | orchestrator | 2025-09-27 22:08:40.667492 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-27 22:08:40.667503 | orchestrator | Saturday 27 September 2025 22:02:58 +0000 (0:00:00.817) 0:00:34.808 **** 2025-09-27 22:08:40.667514 | orchestrator | changed: [testbed-node-1] 
=> (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-27 22:08:40.667528 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-27 22:08:40.667539 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-27 22:08:40.667550 | orchestrator | 2025-09-27 22:08:40.667561 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-27 22:08:40.667571 | orchestrator | Saturday 27 September 2025 22:03:02 +0000 (0:00:03.693) 0:00:38.502 **** 2025-09-27 22:08:40.667583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-27 22:08:40.667594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-27 22:08:40.667604 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-27 22:08:40.667615 | orchestrator | 2025-09-27 22:08:40.667626 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-27 22:08:40.667637 | orchestrator | Saturday 27 September 2025 22:03:05 +0000 (0:00:02.613) 0:00:41.115 **** 2025-09-27 22:08:40.667648 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-27 22:08:40.667660 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-27 22:08:40.667670 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-27 22:08:40.667689 | orchestrator | 2025-09-27 22:08:40.667708 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-27 22:08:40.667727 | orchestrator | Saturday 27 September 2025 22:03:06 +0000 (0:00:01.658) 0:00:42.773 **** 
2025-09-27 22:08:40.667746 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-27 22:08:40.667764 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-27 22:08:40.667781 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-27 22:08:40.667801 | orchestrator | 2025-09-27 22:08:40.667814 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-27 22:08:40.667826 | orchestrator | Saturday 27 September 2025 22:03:08 +0000 (0:00:01.474) 0:00:44.247 **** 2025-09-27 22:08:40.667839 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.667851 | orchestrator | 2025-09-27 22:08:40.667864 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-27 22:08:40.667877 | orchestrator | Saturday 27 September 2025 22:03:08 +0000 (0:00:00.678) 0:00:44.926 **** 2025-09-27 22:08:40.667892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.667908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.667953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.667968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.667982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.667995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.668008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.668021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.668040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.668053 | orchestrator | 2025-09-27 22:08:40.668066 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-27 22:08:40.668078 | orchestrator | Saturday 27 September 2025 22:03:12 +0000 (0:00:03.461) 0:00:48.388 **** 2025-09-27 22:08:40.668105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668156 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.668173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668279 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.668291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668339 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.668351 | orchestrator | 2025-09-27 22:08:40.668362 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-27 22:08:40.668373 | orchestrator | Saturday 27 September 2025 22:03:13 +0000 (0:00:00.951) 0:00:49.340 **** 2025-09-27 22:08:40.668384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668396 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668424 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.668436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668459 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668483 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.668494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668535 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.668546 | orchestrator | 2025-09-27 22:08:40.668558 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-27 22:08:40.668577 | orchestrator | Saturday 27 September 2025 22:03:14 +0000 (0:00:01.192) 0:00:50.532 **** 2025-09-27 22:08:40.668597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668656 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.668667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668712 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.668724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668764 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.668775 | orchestrator | 2025-09-27 22:08:40.668791 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-27 22:08:40.668802 | orchestrator | Saturday 27 September 2025 22:03:16 +0000 (0:00:02.065) 0:00:52.598 **** 2025-09-27 22:08:40.668814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668855 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.668866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668900 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.668922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.668934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.668946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.668964 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.668975 | orchestrator | 2025-09-27 22:08:40.668986 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-27 22:08:40.669005 | orchestrator | Saturday 27 September 2025 22:03:17 +0000 (0:00:00.899) 0:00:53.498 **** 2025-09-27 22:08:40.669024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669074 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.669097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669143 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.669161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669286 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.669304 | orchestrator | 2025-09-27 22:08:40.669324 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-27 22:08:40.669342 | orchestrator | Saturday 27 September 2025 22:03:18 +0000 (0:00:00.730) 0:00:54.228 **** 2025-09-27 22:08:40.669362 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669449 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.669461 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669516 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.669535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669597 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.669608 | orchestrator | 2025-09-27 22:08:40.669619 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal 
TLS certificate] *** 2025-09-27 22:08:40.669630 | orchestrator | Saturday 27 September 2025 22:03:19 +0000 (0:00:00.711) 0:00:54.939 **** 2025-09-27 22:08:40.669641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-09-27 22:08:40.669676 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.669687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669740 | 
orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.669756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669795 | orchestrator | skipping: [testbed-node-2] 
2025-09-27 22:08:40.669805 | orchestrator | 2025-09-27 22:08:40.669815 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-27 22:08:40.669825 | orchestrator | Saturday 27 September 2025 22:03:19 +0000 (0:00:00.453) 0:00:55.393 **** 2025-09-27 22:08:40.669835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669873 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.669894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 22:08:40.669926 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.669936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 22:08:40.669946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 22:08:40.669956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-27 22:08:40.669974 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.669992 | orchestrator |
2025-09-27 22:08:40.670005 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-09-27 22:08:40.670051 | orchestrator | Saturday 27 September 2025 22:03:20 +0000 (0:00:00.685) 0:00:56.079 ****
2025-09-27 22:08:40.670073 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-27 22:08:40.670084 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-27 22:08:40.670101 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-27 22:08:40.670111 | orchestrator |
2025-09-27 22:08:40.670121 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-09-27 22:08:40.670136 | orchestrator | Saturday 27 September 2025 22:03:21 +0000 (0:00:01.663) 0:00:57.743 ****
2025-09-27 22:08:40.670146 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-27 22:08:40.670156 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-27 22:08:40.670166 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-27 22:08:40.670175 | orchestrator |
2025-09-27 22:08:40.670185 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-09-27 22:08:40.670223 | orchestrator | Saturday 27 September 2025 22:03:23 +0000 (0:00:01.289) 0:00:59.190 ****
2025-09-27 22:08:40.670233 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-27 22:08:40.670243 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-27 22:08:40.670252 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-27 22:08:40.670262 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-27 22:08:40.670272 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.670281 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-27 22:08:40.670291 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-27 22:08:40.670301 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.670310 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.670320 | orchestrator |
2025-09-27 22:08:40.670329 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-09-27 22:08:40.670339 | orchestrator | Saturday 27 September 2025 22:03:24 +0000 (0:00:01.289) 0:01:00.479 ****
2025-09-27 22:08:40.670349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-27 22:08:40.670360 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.670379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.670396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 22:08:40.670412 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.670422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 22:08:40.670432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 22:08:40.670442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-27 22:08:40.670453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-27 22:08:40.670469 | orchestrator |
2025-09-27 22:08:40.670479 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-09-27 22:08:40.670488 | orchestrator | Saturday 27 September 2025 22:03:28 +0000 (0:00:03.678) 0:01:04.158 ****
2025-09-27 22:08:40.670498 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:08:40.670508 | orchestrator |
2025-09-27 22:08:40.670517 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-09-27 22:08:40.670527 | orchestrator | Saturday 27 September 2025 22:03:29 +0000 (0:00:01.550) 0:01:05.708 ****
2025-09-27 22:08:40.670537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-27 22:08:40.670555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.670566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.670576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.670586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-27 22:08:40.670629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.670640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-27 22:08:40.673101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.673112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673159 | orchestrator | 2025-09-27 22:08:40.673177 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-27 22:08:40.673217 | orchestrator | Saturday 27 September 2025 22:03:35 +0000 (0:00:05.768) 
0:01:11.477 **** 2025-09-27 22:08:40.673229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-27 22:08:40.673255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.673266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673286 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.673297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-27 22:08:40.673314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.673325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673345 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.673368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-27 22:08:40.673379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.673389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673428 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.673438 | orchestrator | 2025-09-27 22:08:40.673448 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-27 22:08:40.673463 | orchestrator | Saturday 27 September 2025 22:03:36 +0000 (0:00:01.330) 0:01:12.807 **** 2025-09-27 22:08:40.673481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-27 22:08:40.673493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-27 22:08:40.673504 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.673514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-27 22:08:40.673523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-27 22:08:40.673533 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.673543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-27 22:08:40.673553 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-27 22:08:40.673563 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.673573 | orchestrator | 2025-09-27 22:08:40.673597 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-27 22:08:40.673610 | orchestrator | Saturday 27 September 2025 22:03:38 +0000 (0:00:01.598) 0:01:14.405 **** 2025-09-27 22:08:40.673626 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.673637 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.673648 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.673659 | orchestrator | 2025-09-27 22:08:40.673674 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-27 22:08:40.673692 | orchestrator | Saturday 27 September 2025 22:03:39 +0000 (0:00:01.483) 0:01:15.888 **** 2025-09-27 22:08:40.673705 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.673716 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.673727 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.673739 | orchestrator | 2025-09-27 22:08:40.673750 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-27 22:08:40.673761 | orchestrator | Saturday 27 September 2025 22:03:42 +0000 (0:00:02.144) 0:01:18.033 **** 2025-09-27 22:08:40.673780 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.673792 | orchestrator | 2025-09-27 22:08:40.673803 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-27 22:08:40.673814 | orchestrator | Saturday 27 September 2025 22:03:43 +0000 (0:00:01.141) 0:01:19.175 **** 2025-09-27 
22:08:40.673826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.673840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.673889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673908 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.673931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.673955 | orchestrator | 2025-09-27 22:08:40.673965 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-27 22:08:40.673974 | orchestrator | Saturday 27 September 2025 22:03:47 +0000 (0:00:04.573) 0:01:23.749 **** 2025-09-27 22:08:40.673995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.674042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.674055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.674066 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.674076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.674086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.674096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.674106 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.674131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-27 22:08:40.674148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.674159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.674169 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.674179 | orchestrator |
2025-09-27 22:08:40.674244 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-09-27 22:08:40.674255 | orchestrator | Saturday 27 September 2025 22:03:48 +0000 (0:00:01.129) 0:01:24.878 ****
2025-09-27 22:08:40.674265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-27 22:08:40.674276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-27 22:08:40.674286 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.674296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-27 22:08:40.674306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-27 22:08:40.674315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-27 22:08:40.674325 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.674335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-27 22:08:40.674345 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.674354 | orchestrator |
2025-09-27 22:08:40.674364 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-09-27 22:08:40.674380 | orchestrator | Saturday 27 September 2025 22:03:49 +0000 (0:00:00.766) 0:01:25.645 ****
2025-09-27 22:08:40.674390 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.674400 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.674409 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.674419 | orchestrator |
2025-09-27 22:08:40.674428 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-09-27 22:08:40.674438 | orchestrator | Saturday 27 September 2025 22:03:51 +0000 (0:00:01.318) 0:01:26.963 ****
2025-09-27 22:08:40.674447 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.674457 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.674466 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.674476 | orchestrator |
2025-09-27 22:08:40.674491 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-09-27 22:08:40.674506 | orchestrator | Saturday 27 September 2025 22:03:52 +0000 (0:00:01.923) 0:01:28.886 ****
2025-09-27 22:08:40.674516 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.674525 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.674535 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.674544 | orchestrator |
2025-09-27 22:08:40.674554 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-09-27 22:08:40.674563 | orchestrator | Saturday 27 September 2025 22:03:53 +0000 (0:00:00.649) 0:01:29.174 ****
2025-09-27 22:08:40.674573 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:08:40.674582 | orchestrator |
2025-09-27 22:08:40.674592 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-09-27 22:08:40.674601 | orchestrator | Saturday 27 September 2025 22:03:53 +0000 (0:00:00.649) 0:01:29.823 ****
2025-09-27 22:08:40.674612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-27 22:08:40.674622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-27 22:08:40.674633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-27 22:08:40.674649 | orchestrator |
2025-09-27 22:08:40.674659 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-09-27 22:08:40.674668 | orchestrator | Saturday 27 September 2025 22:03:56 +0000 (0:00:02.361) 0:01:32.184 ****
2025-09-27 22:08:40.674684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-27 22:08:40.674694 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.674708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-27 22:08:40.674719 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.674729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-27 22:08:40.674739 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.674749 | orchestrator |
2025-09-27 22:08:40.674758 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-09-27 22:08:40.674768 | orchestrator | Saturday 27 September 2025 22:03:57 +0000 (0:00:01.395) 0:01:33.580 ****
2025-09-27 22:08:40.674779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-27 22:08:40.674797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-27 22:08:40.674806 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.674814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-27 22:08:40.674823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-27 22:08:40.674831 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.674843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-27 22:08:40.674856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-27 22:08:40.674864 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.674872 | orchestrator |
2025-09-27 22:08:40.674880 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-09-27 22:08:40.674888 | orchestrator | Saturday 27 September 2025 22:03:59 +0000 (0:00:01.744) 0:01:35.324 ****
2025-09-27 22:08:40.674896 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.674903 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.674911 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.674919 | orchestrator |
2025-09-27 22:08:40.674927 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-09-27 22:08:40.674934 | orchestrator | Saturday 27 September 2025 22:03:59 +0000 (0:00:00.554) 0:01:35.879 ****
2025-09-27 22:08:40.674942 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.674950 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.674958 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.674966 | orchestrator |
2025-09-27 22:08:40.674974 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-09-27 22:08:40.674982 | orchestrator | Saturday 27 September 2025 22:04:00 +0000 (0:00:01.011) 0:01:36.891 ****
2025-09-27 22:08:40.674989 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:08:40.674997 | orchestrator |
2025-09-27 22:08:40.675005 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-09-27 22:08:40.675013 | orchestrator | Saturday 27 September 2025 22:04:01 +0000 (0:00:00.647) 0:01:37.538 ****
2025-09-27 22:08:40.675021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-27 22:08:40.675037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-27 22:08:40.675083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-27 22:08:40.675130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675159 | orchestrator |
2025-09-27 22:08:40.675167 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-09-27 22:08:40.675175 | orchestrator | Saturday 27 September 2025 22:04:04 +0000 (0:00:03.061) 0:01:40.600 ****
2025-09-27 22:08:40.675184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-27 22:08:40.675214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-27 22:08:40.675224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675270 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.675290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675299 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.675307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-27 22:08:40.675321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-27 22:08:40.675346 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.675354 | orchestrator |
2025-09-27 22:08:40.675362 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-27 22:08:40.675369 | orchestrator | Saturday 27 September 2025 22:04:05 +0000 (0:00:01.274) 0:01:41.874 ****
2025-09-27 22:08:40.675378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-27 22:08:40.675451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-27 22:08:40.675463 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.675471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-27 22:08:40.675479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-27 22:08:40.675492 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.675500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-27 22:08:40.675508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-27 22:08:40.675516 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.675524 | orchestrator |
2025-09-27 22:08:40.675532 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-27 22:08:40.675539 | orchestrator | Saturday 27 September 2025 22:04:06 +0000 (0:00:00.901) 0:01:42.776 ****
2025-09-27 22:08:40.675547 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.675555 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.675563 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.675570 | orchestrator |
2025-09-27 22:08:40.675578 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-27 22:08:40.675586 | orchestrator | Saturday 27 September 2025 22:04:08 +0000 (0:00:01.208) 0:01:43.985 ****
2025-09-27 22:08:40.675594 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.675602 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.675609 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.675617 | orchestrator |
2025-09-27 22:08:40.675625 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-27 22:08:40.675633 | orchestrator | Saturday 27 September 2025 22:04:10 +0000 (0:00:02.036) 0:01:46.021 ****
2025-09-27 22:08:40.675640 | orchestrator | skipping: [testbed-node-0]
2025-09-27
22:08:40.675648 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.675656 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.675664 | orchestrator | 2025-09-27 22:08:40.675672 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-27 22:08:40.675679 | orchestrator | Saturday 27 September 2025 22:04:10 +0000 (0:00:00.544) 0:01:46.566 **** 2025-09-27 22:08:40.675687 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.675695 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.675702 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.675710 | orchestrator | 2025-09-27 22:08:40.675718 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-27 22:08:40.675725 | orchestrator | Saturday 27 September 2025 22:04:10 +0000 (0:00:00.340) 0:01:46.907 **** 2025-09-27 22:08:40.675733 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.675741 | orchestrator | 2025-09-27 22:08:40.675748 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-27 22:08:40.675756 | orchestrator | Saturday 27 September 2025 22:04:11 +0000 (0:00:00.805) 0:01:47.713 **** 2025-09-27 22:08:40.675764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:08:40.675777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:08:40.675794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:08:40.675829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:08:40.675866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:08:40.675930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:08:40.675939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.675984 | orchestrator | 2025-09-27 22:08:40.675993 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-27 22:08:40.676000 | orchestrator | Saturday 27 September 2025 22:04:15 +0000 (0:00:04.168) 0:01:51.881 **** 2025-09-27 22:08:40.676017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:08:40.676026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:08:40.676034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676079 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676088 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.676096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:08:40.676105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:08:40.676113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:08:40.676174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:08:40.676208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676216 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.676224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.676279 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.676287 | orchestrator | 2025-09-27 22:08:40.676295 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-27 22:08:40.676303 | orchestrator | Saturday 27 September 2025 22:04:16 +0000 (0:00:00.865) 0:01:52.746 **** 2025-09-27 22:08:40.676311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-27 22:08:40.676319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-27 22:08:40.676327 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.676335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-27 22:08:40.676343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-27 22:08:40.676356 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.676364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-27 22:08:40.676372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}})  2025-09-27 22:08:40.676380 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.676388 | orchestrator | 2025-09-27 22:08:40.676396 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-27 22:08:40.676404 | orchestrator | Saturday 27 September 2025 22:04:17 +0000 (0:00:00.966) 0:01:53.713 **** 2025-09-27 22:08:40.676411 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.676419 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.676427 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.676435 | orchestrator | 2025-09-27 22:08:40.676443 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-27 22:08:40.676450 | orchestrator | Saturday 27 September 2025 22:04:19 +0000 (0:00:01.273) 0:01:54.987 **** 2025-09-27 22:08:40.676458 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.676466 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.676473 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.676481 | orchestrator | 2025-09-27 22:08:40.676489 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-27 22:08:40.676497 | orchestrator | Saturday 27 September 2025 22:04:21 +0000 (0:00:02.042) 0:01:57.029 **** 2025-09-27 22:08:40.676504 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.676512 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.676520 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.676527 | orchestrator | 2025-09-27 22:08:40.676535 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-27 22:08:40.676543 | orchestrator | Saturday 27 September 2025 22:04:21 +0000 (0:00:00.521) 0:01:57.551 **** 2025-09-27 22:08:40.676551 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 
2025-09-27 22:08:40.676558 | orchestrator | 2025-09-27 22:08:40.676566 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-27 22:08:40.676574 | orchestrator | Saturday 27 September 2025 22:04:22 +0000 (0:00:00.829) 0:01:58.380 **** 2025-09-27 22:08:40.676594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}}}}) 2025-09-27 22:08:40.676610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.676630 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 22:08:40.676641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.676665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 22:08:40.676676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.676690 | orchestrator | 2025-09-27 22:08:40.676698 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-27 22:08:40.676706 | orchestrator | Saturday 27 September 2025 22:04:26 +0000 (0:00:04.170) 0:02:02.550 **** 2025-09-27 22:08:40.676719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 22:08:40.676733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.676749 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.676758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 22:08:40.676776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.676790 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.676799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 22:08:40.676817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.676835 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.676843 | orchestrator | 2025-09-27 22:08:40.676851 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-27 22:08:40.676859 | orchestrator | Saturday 27 September 2025 22:04:29 +0000 (0:00:03.004) 0:02:05.554 **** 2025-09-27 22:08:40.676867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 22:08:40.676876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 22:08:40.676884 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.676892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 22:08:40.676901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 22:08:40.676909 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.676917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 22:08:40.676933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 22:08:40.676947 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.676955 | orchestrator | 2025-09-27 22:08:40.676963 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-27 22:08:40.676971 | orchestrator | Saturday 27 September 2025 22:04:32 +0000 (0:00:03.156) 0:02:08.711 **** 2025-09-27 22:08:40.676979 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.676987 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.676994 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.677002 | orchestrator | 2025-09-27 22:08:40.677010 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-27 22:08:40.677018 | orchestrator | Saturday 27 September 2025 22:04:33 +0000 (0:00:01.169) 0:02:09.881 **** 2025-09-27 22:08:40.677026 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.677033 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.677041 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.677049 | orchestrator | 2025-09-27 22:08:40.677057 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-27 22:08:40.677064 | orchestrator | Saturday 27 September 2025 22:04:35 +0000 (0:00:01.811) 0:02:11.692 
****
2025-09-27 22:08:40.677072 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.677080 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.677088 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.677095 | orchestrator |
2025-09-27 22:08:40.677103 | orchestrator | TASK [include_role : grafana] **************************************************
2025-09-27 22:08:40.677111 | orchestrator | Saturday 27 September 2025 22:04:36 +0000 (0:00:00.433) 0:02:12.126 ****
2025-09-27 22:08:40.677119 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:08:40.677126 | orchestrator |
2025-09-27 22:08:40.677134 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-09-27 22:08:40.677142 | orchestrator | Saturday 27 September 2025 22:04:37 +0000 (0:00:00.821) 0:02:12.947 ****
2025-09-27 22:08:40.677150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-27 22:08:40.677159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-27 22:08:40.677168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-27 22:08:40.677180 | orchestrator |
2025-09-27 22:08:40.677227 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-09-27 22:08:40.677237 | orchestrator | Saturday 27 September 2025 22:04:40 +0000 (0:00:03.204) 0:02:16.152 ****
2025-09-27 22:08:40.677255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-27 22:08:40.677264 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.677272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-27 22:08:40.677280 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.677289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-27 22:08:40.677297 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.677305 | orchestrator |
2025-09-27 22:08:40.677313 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-09-27 22:08:40.677320 | orchestrator | Saturday 27 September 2025 22:04:40 +0000 (0:00:00.567) 0:02:16.720 ****
2025-09-27 22:08:40.677328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-27 22:08:40.677336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-27 22:08:40.677344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-27 22:08:40.677352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-27 22:08:40.677360 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.677368 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.677376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-27 22:08:40.677389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-27 22:08:40.677396 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.677403 | orchestrator |
2025-09-27 22:08:40.677410 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-09-27 22:08:40.677416 | orchestrator | Saturday 27 September 2025 22:04:41 +0000 (0:00:00.617) 0:02:17.337 ****
2025-09-27 22:08:40.677423 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.677430 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.677436 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.677443 | orchestrator |
2025-09-27 22:08:40.677449 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-09-27 22:08:40.677456 | orchestrator | Saturday 27 September 2025 22:04:42 +0000 (0:00:01.147) 0:02:18.485 ****
2025-09-27 22:08:40.677463 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.677469 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.677476 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.677482 | orchestrator |
2025-09-27 22:08:40.677489 | orchestrator | TASK [include_role : heat] *****************************************************
2025-09-27 22:08:40.677495 | orchestrator | Saturday 27 September 2025 22:04:44 +0000 (0:00:01.931) 0:02:20.416 ****
2025-09-27 22:08:40.677502 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.677509 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.677519 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.677526 | orchestrator |
2025-09-27 22:08:40.677533 | orchestrator | TASK [include_role : horizon] **************************************************
2025-09-27 22:08:40.677543 | orchestrator | Saturday 27 September 2025 22:04:45 +0000 (0:00:00.581) 0:02:20.998 ****
2025-09-27 22:08:40.677550 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:08:40.677556 | orchestrator |
2025-09-27 22:08:40.677563 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-09-27 22:08:40.677569 | orchestrator | Saturday 27 September 2025 22:04:46 +0000 (0:00:00.980) 0:02:21.979 ****
2025-09-27 22:08:40.677577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-27 22:08:40.677599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-27 22:08:40.677608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-27 22:08:40.677619 | orchestrator |
2025-09-27 22:08:40.677626 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-09-27 22:08:40.677633 | orchestrator | Saturday 27 September 2025 22:04:49 +0000 (0:00:03.587) 0:02:25.567 ****
2025-09-27 22:08:40.677648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-27 22:08:40.677656 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.677664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-27 22:08:40.677677 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.677693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-27 22:08:40.677701 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.677707 | orchestrator |
2025-09-27 22:08:40.677714 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-09-27 22:08:40.677721 | orchestrator | Saturday 27 September 2025 22:04:50 +0000 (0:00:01.340) 0:02:26.908 ****
2025-09-27 22:08:40.677728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-27 22:08:40.677740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-27 22:08:40.677747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-27 22:08:40.677754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-27 22:08:40.677761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-27 22:08:40.677768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-27 22:08:40.677775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-27 22:08:40.677781 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.677788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-27 22:08:40.677803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-27 22:08:40.677810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-27 22:08:40.677817 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.677824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-27 22:08:40.677831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-27 22:08:40.677838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-27 22:08:40.677851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-27 22:08:40.677858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-27 22:08:40.677864 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.677871 | orchestrator |
2025-09-27 22:08:40.677878 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-09-27 22:08:40.677884 | orchestrator | Saturday 27 September 2025 22:04:52 +0000 (0:00:01.229) 0:02:28.137 ****
2025-09-27 22:08:40.677891 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.677898 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.677904 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.677911 | orchestrator |
2025-09-27 22:08:40.677918 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-09-27 22:08:40.677924 | orchestrator | Saturday 27 September 2025 22:04:53 +0000 (0:00:01.281) 0:02:29.418 ****
2025-09-27 22:08:40.677931 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.677937 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.677944 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.677951 | orchestrator |
2025-09-27 22:08:40.677957 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-09-27 22:08:40.677964 | orchestrator | Saturday 27 September 2025 22:04:55 +0000 (0:00:01.997) 0:02:31.415 ****
2025-09-27 22:08:40.677971 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.677977 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.677984 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.677990 | orchestrator |
2025-09-27 22:08:40.677997 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-09-27 22:08:40.678003 | orchestrator | Saturday 27 September 2025 22:04:55 +0000 (0:00:00.320) 0:02:31.736 ****
2025-09-27 22:08:40.678010 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.678038 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.678045 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.678052 | orchestrator |
2025-09-27 22:08:40.678060 | orchestrator | TASK [include_role : keystone] *************************************************
2025-09-27 22:08:40.678067 | orchestrator | Saturday 27 September 2025 22:04:56 +0000 (0:00:00.550) 0:02:32.286 ****
2025-09-27 22:08:40.678073 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:08:40.678080 | orchestrator |
2025-09-27 22:08:40.678087 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-09-27 22:08:40.678093 | orchestrator | Saturday 27 September 2025 22:04:57 +0000 (0:00:01.030) 0:02:33.316 ****
2025-09-27 22:08:40.678114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:08:40.678127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:08:40.678135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:08:40.678143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:08:40.678151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:08:40.678158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:08:40.678173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:08:40.678186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:08:40.678206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:08:40.678213 | orchestrator |
2025-09-27 22:08:40.678219 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-09-27 22:08:40.678227 | orchestrator | Saturday 27 September 2025 22:05:00 +0000 (0:00:03.389) 0:02:36.706 ****
2025-09-27 22:08:40.678234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance
roundrobin']}}}})  2025-09-27 22:08:40.678241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:08:40.678258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 22:08:40.678269 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.678277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 22:08:40.678284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:08:40.678291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 22:08:40.678298 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.678305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 22:08:40.678320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:08:40.678332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 22:08:40.678339 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.678346 | orchestrator | 2025-09-27 22:08:40.678353 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-27 22:08:40.678359 | orchestrator | Saturday 27 September 2025 22:05:01 +0000 (0:00:00.831) 0:02:37.537 **** 2025-09-27 22:08:40.678366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 22:08:40.678374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 22:08:40.678381 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.678387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 22:08:40.678394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 22:08:40.678401 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.678408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 22:08:40.678415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 22:08:40.678422 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.678429 | orchestrator | 2025-09-27 22:08:40.678435 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-27 22:08:40.678442 | orchestrator | Saturday 27 September 2025 22:05:02 +0000 (0:00:00.827) 0:02:38.366 **** 2025-09-27 22:08:40.678449 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.678455 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.678462 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.678469 | orchestrator | 2025-09-27 22:08:40.678475 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-27 22:08:40.678482 | orchestrator | Saturday 27 September 2025 22:05:03 +0000 (0:00:01.269) 0:02:39.635 **** 2025-09-27 22:08:40.678493 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.678499 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.678506 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.678512 | orchestrator | 2025-09-27 22:08:40.678519 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-27 22:08:40.678526 | orchestrator | Saturday 27 September 2025 22:05:05 +0000 (0:00:02.137) 0:02:41.772 **** 2025-09-27 22:08:40.678532 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.678539 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.678545 | orchestrator | 
skipping: [testbed-node-2] 2025-09-27 22:08:40.678552 | orchestrator | 2025-09-27 22:08:40.678558 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-27 22:08:40.678565 | orchestrator | Saturday 27 September 2025 22:05:06 +0000 (0:00:00.532) 0:02:42.305 **** 2025-09-27 22:08:40.678572 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.678578 | orchestrator | 2025-09-27 22:08:40.678585 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-27 22:08:40.678592 | orchestrator | Saturday 27 September 2025 22:05:07 +0000 (0:00:01.002) 0:02:43.307 **** 2025-09-27 22:08:40.678610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:08:40.678618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.678626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:08:40.678633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.678644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:08:40.678659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.678667 | orchestrator | 2025-09-27 22:08:40.678674 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using 
single external frontend] *** 2025-09-27 22:08:40.678680 | orchestrator | Saturday 27 September 2025 22:05:11 +0000 (0:00:03.934) 0:02:47.242 **** 2025-09-27 22:08:40.678687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:08:40.678694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.678709 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.678716 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:08:40.678727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.678737 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.678745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:08:40.678752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.678758 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.678765 | orchestrator | 2025-09-27 22:08:40.678772 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-27 22:08:40.678778 | orchestrator | Saturday 27 September 2025 22:05:12 +0000 (0:00:00.925) 0:02:48.167 **** 2025-09-27 22:08:40.678785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-27 22:08:40.678798 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-27 22:08:40.678805 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.678824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-27 22:08:40.678831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-27 22:08:40.678838 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.678852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-27 22:08:40.678859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-27 22:08:40.678866 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.678873 | orchestrator | 2025-09-27 22:08:40.678879 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-27 22:08:40.678886 | orchestrator | Saturday 27 September 2025 22:05:13 +0000 (0:00:00.875) 0:02:49.043 **** 2025-09-27 22:08:40.678892 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.678899 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.678905 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.678912 | orchestrator | 2025-09-27 22:08:40.678918 | orchestrator | TASK [proxysql-config : Copying over magnum 
ProxySQL rules config] ************* 2025-09-27 22:08:40.678925 | orchestrator | Saturday 27 September 2025 22:05:14 +0000 (0:00:01.312) 0:02:50.356 **** 2025-09-27 22:08:40.678932 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.678938 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.678945 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.678951 | orchestrator | 2025-09-27 22:08:40.678958 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-27 22:08:40.678964 | orchestrator | Saturday 27 September 2025 22:05:16 +0000 (0:00:02.013) 0:02:52.369 **** 2025-09-27 22:08:40.678975 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.678982 | orchestrator | 2025-09-27 22:08:40.678989 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-27 22:08:40.678999 | orchestrator | Saturday 27 September 2025 22:05:17 +0000 (0:00:01.255) 0:02:53.625 **** 2025-09-27 22:08:40.679006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-27 22:08:40.679013 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-27 22:08:40.679050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-27 22:08:40.679096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679121 | orchestrator | 2025-09-27 22:08:40.679128 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-27 22:08:40.679138 | orchestrator | Saturday 27 September 2025 22:05:21 +0000 (0:00:03.658) 0:02:57.283 **** 2025-09-27 22:08:40.679145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 
'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-27 22:08:40.679156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-27 22:08:40.679163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679225 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.679232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679239 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.679246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-27 22:08:40.679253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.679281 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.679288 | orchestrator | 2025-09-27 22:08:40.679295 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-27 22:08:40.679302 | orchestrator | Saturday 27 September 2025 22:05:22 +0000 (0:00:00.668) 0:02:57.952 **** 2025-09-27 22:08:40.679315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-27 22:08:40.679322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-27 22:08:40.679328 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.679335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-27 22:08:40.679342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-27 22:08:40.679349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786'}})  2025-09-27 22:08:40.679355 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.679362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-27 22:08:40.679368 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.679375 | orchestrator | 2025-09-27 22:08:40.679382 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-27 22:08:40.679388 | orchestrator | Saturday 27 September 2025 22:05:23 +0000 (0:00:01.410) 0:02:59.362 **** 2025-09-27 22:08:40.679395 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.679402 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.679408 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.679415 | orchestrator | 2025-09-27 22:08:40.679421 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-27 22:08:40.679428 | orchestrator | Saturday 27 September 2025 22:05:24 +0000 (0:00:01.250) 0:03:00.613 **** 2025-09-27 22:08:40.679435 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.679441 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.679448 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.679454 | orchestrator | 2025-09-27 22:08:40.679461 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-27 22:08:40.679468 | orchestrator | Saturday 27 September 2025 22:05:26 +0000 (0:00:01.866) 0:03:02.479 **** 2025-09-27 22:08:40.679474 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.679481 | orchestrator | 2025-09-27 22:08:40.679487 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] 
******************************* 2025-09-27 22:08:40.679494 | orchestrator | Saturday 27 September 2025 22:05:27 +0000 (0:00:01.304) 0:03:03.784 **** 2025-09-27 22:08:40.679501 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 22:08:40.679507 | orchestrator | 2025-09-27 22:08:40.679514 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-27 22:08:40.679521 | orchestrator | Saturday 27 September 2025 22:05:30 +0000 (0:00:02.781) 0:03:06.565 **** 2025-09-27 22:08:40.679536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:08:40.679549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 22:08:40.679556 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.679563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:08:40.679571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 22:08:40.679582 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.679597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:08:40.679605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 22:08:40.679612 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.679619 | orchestrator | 2025-09-27 22:08:40.679626 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-27 22:08:40.679632 | orchestrator | Saturday 27 September 2025 22:05:32 +0000 (0:00:02.091) 0:03:08.657 **** 2025-09-27 22:08:40.679640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:08:40.679661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 22:08:40.679669 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.679676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:08:40.679683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 22:08:40.679690 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.679710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:08:40.679718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 22:08:40.679725 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.679732 | orchestrator | 2025-09-27 22:08:40.679739 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-27 22:08:40.679745 | orchestrator | Saturday 27 September 2025 22:05:34 +0000 (0:00:02.252) 0:03:10.909 **** 2025-09-27 22:08:40.679752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 22:08:40.679760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 22:08:40.679771 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.679778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 22:08:40.679785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 22:08:40.679792 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.679806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 22:08:40.679814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 22:08:40.679821 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.679828 | orchestrator | 2025-09-27 22:08:40.679834 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-27 22:08:40.679841 | orchestrator | Saturday 27 September 2025 22:05:37 +0000 (0:00:02.702) 0:03:13.611 **** 2025-09-27 22:08:40.679848 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.679857 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.679868 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.679879 | orchestrator | 2025-09-27 22:08:40.679889 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-27 22:08:40.679900 | orchestrator | Saturday 27 September 2025 22:05:39 +0000 (0:00:01.939) 0:03:15.551 **** 2025-09-27 22:08:40.679910 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.679920 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.679931 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.679942 | orchestrator | 2025-09-27 22:08:40.679953 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-27 22:08:40.679963 | orchestrator | Saturday 27 September 2025 22:05:41 +0000 (0:00:01.457) 0:03:17.008 **** 2025-09-27 22:08:40.679974 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.679985 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.679995 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.680007 
| orchestrator | 2025-09-27 22:08:40.680019 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-27 22:08:40.680039 | orchestrator | Saturday 27 September 2025 22:05:41 +0000 (0:00:00.312) 0:03:17.320 **** 2025-09-27 22:08:40.680050 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.680060 | orchestrator | 2025-09-27 22:08:40.680071 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-27 22:08:40.680082 | orchestrator | Saturday 27 September 2025 22:05:42 +0000 (0:00:01.341) 0:03:18.662 **** 2025-09-27 22:08:40.680093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-27 22:08:40.680106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': 
{'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-27 22:08:40.680124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-27 22:08:40.680132 | orchestrator | 2025-09-27 22:08:40.680139 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-27 22:08:40.680145 | orchestrator | Saturday 27 September 2025 22:05:44 +0000 (0:00:01.499) 0:03:20.162 **** 2025-09-27 22:08:40.680152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-27 22:08:40.680159 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.680166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-27 22:08:40.680179 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.680186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-27 22:08:40.680212 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.680219 | orchestrator | 2025-09-27 22:08:40.680226 | orchestrator | TASK [haproxy-config : Configuring 
firewall for memcached] ********************* 2025-09-27 22:08:40.680233 | orchestrator | Saturday 27 September 2025 22:05:44 +0000 (0:00:00.428) 0:03:20.590 **** 2025-09-27 22:08:40.680240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-27 22:08:40.680248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-27 22:08:40.680255 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.680261 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.680273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-27 22:08:40.680280 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.680287 | orchestrator | 2025-09-27 22:08:40.680297 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-27 22:08:40.680305 | orchestrator | Saturday 27 September 2025 22:05:45 +0000 (0:00:00.638) 0:03:21.229 **** 2025-09-27 22:08:40.680311 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.680318 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.680324 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.680331 | orchestrator | 2025-09-27 22:08:40.680338 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] 
********** 2025-09-27 22:08:40.680344 | orchestrator | Saturday 27 September 2025 22:05:46 +0000 (0:00:00.807) 0:03:22.036 **** 2025-09-27 22:08:40.680351 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.680358 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.680364 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.680371 | orchestrator | 2025-09-27 22:08:40.680377 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-27 22:08:40.680384 | orchestrator | Saturday 27 September 2025 22:05:47 +0000 (0:00:01.279) 0:03:23.315 **** 2025-09-27 22:08:40.680396 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.680403 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.680409 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.680416 | orchestrator | 2025-09-27 22:08:40.680423 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-27 22:08:40.680429 | orchestrator | Saturday 27 September 2025 22:05:47 +0000 (0:00:00.320) 0:03:23.636 **** 2025-09-27 22:08:40.680436 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.680443 | orchestrator | 2025-09-27 22:08:40.680449 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-27 22:08:40.680456 | orchestrator | Saturday 27 September 2025 22:05:49 +0000 (0:00:01.425) 0:03:25.062 **** 2025-09-27 22:08:40.680463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 22:08:40.680470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 22:08:40.680514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-09-27 22:08:40.680673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 22:08:40.680691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.680699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.680769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.680783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 22:08:40.680798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2025-09-27 22:08:40.680810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 22:08:40.680865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.680892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.680939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.680953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.680972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.680984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.680995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': 
False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.681071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.681082 | orchestrator | 2025-09-27 22:08:40.681092 | orchestrator | 
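The wall of skips above comes from kolla-ansible's haproxy-config role looping over the neutron service map: judging from the logged items, an entry is only configured when the service is enabled and declares at least one enabled `haproxy` listener. A minimal sketch of that filter — hypothetical helper name, not the role's actual Jinja condition — assuming service dicts shaped like the logged items:

```python
# Hypothetical sketch (not the actual kolla-ansible role code): reproduces
# the skip behaviour visible in the log above for dicts like the logged items.

def service_needs_haproxy(service: dict) -> bool:
    """Return True when a kolla service dict should get an HAProxy
    frontend/backend configured."""
    # 'enabled' appears in the log both as bool True/False and as the
    # string 'no' (neutron-tls-proxy), so normalize it first.
    enabled = service.get("enabled", False)
    if isinstance(enabled, str):
        enabled = enabled.lower() in ("yes", "true", "1")
    if not enabled:
        return False
    # Only services that declare a 'haproxy' mapping with at least one
    # enabled listener need configuration; neutron-ovn-metadata-agent is
    # enabled but has no 'haproxy' key, so it is still skipped.
    haproxy = service.get("haproxy", {})
    return any(listener.get("enabled") for listener in haproxy.values())
```

This matches the pattern in the log: `neutron-server` (enabled, with enabled `neutron_server`/`neutron_server_external` listeners) would pass, while `neutron-tls-proxy` (`'enabled': 'no'`, listeners disabled) and all the disabled agents are skipped.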
TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-27 22:08:40.681103 | orchestrator | Saturday 27 September 2025 22:05:54 +0000 (0:00:04.971) 0:03:30.033 **** 2025-09-27 22:08:40.681115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:08:40.681126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681139 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:08:40.681170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 
'timeout': '30'}}})  2025-09-27 22:08:40.681242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 22:08:40.681265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 22:08:40.681282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 
'timeout': '30'}}})  2025-09-27 22:08:40.681344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.681361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.681394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.681469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.681477 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.681485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:08:40.681502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 22:08:40.681515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.681546 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.681554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 
'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 22:08:40.681577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:08:40.681623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 22:08:40.681651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.681716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-27 22:08:40.681732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:08:40.681739 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.681746 | orchestrator |
2025-09-27 22:08:40.681753 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-09-27 22:08:40.681760 | orchestrator | Saturday 27 September 2025 22:05:55 +0000 (0:00:01.744) 0:03:31.777 ****
2025-09-27 22:08:40.681767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-27 22:08:40.681780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-27 22:08:40.681787 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.681794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-27 22:08:40.681801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-27 22:08:40.681808 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.681815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-27 22:08:40.681824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-27 22:08:40.681834 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.681845 | orchestrator |
2025-09-27 22:08:40.681856 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-09-27 22:08:40.681866 | orchestrator | Saturday 27 September 2025 22:05:57 +0000 (0:00:01.678) 0:03:33.456 ****
2025-09-27 22:08:40.681877 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.681887 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.681898 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.681908 | orchestrator |
2025-09-27 22:08:40.681918 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-09-27 22:08:40.681927 | orchestrator | Saturday 27 September 2025 22:05:58 +0000 (0:00:01.203) 0:03:34.659 ****
2025-09-27 22:08:40.681937 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.681948 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.681959 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.681970 | orchestrator |
2025-09-27 22:08:40.681981 | orchestrator | TASK [include_role : placement] ************************************************
2025-09-27 22:08:40.681991 | orchestrator | Saturday 27 September 2025 22:06:00
+0000 (0:00:02.016) 0:03:36.675 **** 2025-09-27 22:08:40.682001 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.682012 | orchestrator | 2025-09-27 22:08:40.682074 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-27 22:08:40.682086 | orchestrator | Saturday 27 September 2025 22:06:01 +0000 (0:00:01.199) 0:03:37.875 **** 2025-09-27 22:08:40.682115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.682131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.682149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.682156 | orchestrator | 2025-09-27 22:08:40.682163 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-27 22:08:40.682170 | orchestrator | Saturday 27 September 2025 22:06:05 +0000 (0:00:03.888) 0:03:41.764 **** 2025-09-27 22:08:40.682177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.682184 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.682308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.682317 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.682324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-27 22:08:40.682336 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.682343 | orchestrator |
2025-09-27 22:08:40.682350 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-09-27 22:08:40.682360 | orchestrator | Saturday 27 September 2025 22:06:06 +0000 (0:00:00.527) 0:03:42.292 ****
2025-09-27 22:08:40.682371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-27 22:08:40.682384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-27 22:08:40.682395 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:08:40.682405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-27 22:08:40.682416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-27 22:08:40.682426 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:08:40.682437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-27 22:08:40.682448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-27 22:08:40.682458 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:08:40.682469 | orchestrator |
2025-09-27 22:08:40.682476 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-09-27 22:08:40.682482 | orchestrator | Saturday 27 September 2025 22:06:07 +0000 (0:00:00.869) 0:03:43.162 ****
2025-09-27 22:08:40.682488 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.682494 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.682500 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.682506 | orchestrator |
2025-09-27 22:08:40.682513 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-09-27 22:08:40.682519 | orchestrator | Saturday 27 September 2025 22:06:08 +0000 (0:00:01.424) 0:03:44.586 ****
2025-09-27 22:08:40.682525 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:08:40.682531 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:08:40.682537 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:08:40.682544 | orchestrator |
2025-09-27 22:08:40.682550 | orchestrator | TASK [include_role : nova] *****************************************************
2025-09-27 22:08:40.682556 | orchestrator | Saturday 27 September 2025 22:06:10 +0000 (0:00:02.198) 0:03:46.784 ****
2025-09-27 22:08:40.682563 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
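The "changed"/"skipping" pattern above follows from the shape of the items being looped over: each service is a dict of container definitions, and the haproxy-config tasks only act on entries whose `haproxy` sub-entries are enabled. A minimal Python sketch of that selection logic, using the `placement-api` item exactly as it appears in the log (the helper `enabled_frontends` is hypothetical, not part of kolla-ansible):

```python
# A trimmed container-definition item, copied from the log output above.
placement_api = {
    "container_name": "placement_api",
    "enabled": True,
    "haproxy": {
        "placement_api": {
            "enabled": True, "mode": "http", "external": False,
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
        "placement_api_external": {
            "enabled": True, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
    },
}

def enabled_frontends(service):
    """Return the haproxy frontend names that would be rendered; a disabled
    service (or disabled frontend) yields nothing, i.e. 'skipping'."""
    if not service.get("enabled"):
        return []
    return [name for name, cfg in service.get("haproxy", {}).items()
            if cfg.get("enabled")]

print(enabled_frontends(placement_api))  # both frontends are enabled here
```

Items like `neutron_tls_proxy` (with `'enabled': False` frontends) would return an empty list, matching the "skipping" results shown for every node.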
2025-09-27 22:08:40.682569 | orchestrator | 2025-09-27 22:08:40.682575 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-27 22:08:40.682587 | orchestrator | Saturday 27 September 2025 22:06:12 +0000 (0:00:01.584) 0:03:48.368 **** 2025-09-27 22:08:40.682605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.682612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.682633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.682666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682679 | orchestrator | 2025-09-27 22:08:40.682685 | orchestrator | TASK [haproxy-config : Add 
configuration for nova when using single external frontend] *** 2025-09-27 22:08:40.682691 | orchestrator | Saturday 27 September 2025 22:06:16 +0000 (0:00:04.411) 0:03:52.780 **** 2025-09-27 22:08:40.682702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.682718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682731 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.682738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.682745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682762 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.682780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.682787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.682800 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.682806 
| orchestrator | 2025-09-27 22:08:40.682813 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-27 22:08:40.682819 | orchestrator | Saturday 27 September 2025 22:06:18 +0000 (0:00:01.241) 0:03:54.021 **** 2025-09-27 22:08:40.682825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682856 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.682863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682893 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.682903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 22:08:40.682928 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.682934 | orchestrator | 2025-09-27 22:08:40.682941 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-27 22:08:40.682947 | orchestrator | Saturday 27 September 2025 22:06:18 +0000 (0:00:00.883) 0:03:54.905 **** 2025-09-27 22:08:40.682953 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.682959 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.682965 | orchestrator | changed: [testbed-node-2] 
2025-09-27 22:08:40.682971 | orchestrator | 2025-09-27 22:08:40.682978 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-27 22:08:40.682984 | orchestrator | Saturday 27 September 2025 22:06:20 +0000 (0:00:01.327) 0:03:56.232 **** 2025-09-27 22:08:40.682991 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.683002 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.683012 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.683022 | orchestrator | 2025-09-27 22:08:40.683032 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-27 22:08:40.683042 | orchestrator | Saturday 27 September 2025 22:06:22 +0000 (0:00:02.072) 0:03:58.305 **** 2025-09-27 22:08:40.683051 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.683062 | orchestrator | 2025-09-27 22:08:40.683072 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-27 22:08:40.683089 | orchestrator | Saturday 27 September 2025 22:06:23 +0000 (0:00:01.489) 0:03:59.794 **** 2025-09-27 22:08:40.683100 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-27 22:08:40.683110 | orchestrator | 2025-09-27 22:08:40.683120 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-27 22:08:40.683130 | orchestrator | Saturday 27 September 2025 22:06:24 +0000 (0:00:00.836) 0:04:00.630 **** 2025-09-27 22:08:40.683141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-27 22:08:40.683149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-27 22:08:40.683156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-27 22:08:40.683163 | orchestrator | 2025-09-27 22:08:40.683169 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-27 22:08:40.683176 | orchestrator | Saturday 27 September 2025 22:06:28 +0000 (0:00:04.093) 0:04:04.723 **** 2025-09-27 22:08:40.683212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 22:08:40.683221 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.683228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 22:08:40.683235 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.683241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 22:08:40.683254 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.683260 | orchestrator | 2025-09-27 22:08:40.683266 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-27 22:08:40.683272 | orchestrator | Saturday 27 September 2025 22:06:30 +0000 (0:00:01.407) 0:04:06.131 **** 2025-09-27 22:08:40.683279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 
22:08:40.683286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 22:08:40.683293 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.683300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 22:08:40.683306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 22:08:40.683313 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.683319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 22:08:40.683326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 22:08:40.683332 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.683338 | orchestrator | 2025-09-27 22:08:40.683345 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-27 22:08:40.683351 | orchestrator | Saturday 27 September 2025 22:06:31 +0000 (0:00:01.487) 0:04:07.618 **** 2025-09-27 22:08:40.683358 | orchestrator | changed: [testbed-node-0] 2025-09-27 
22:08:40.683364 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.683371 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.683377 | orchestrator | 2025-09-27 22:08:40.683383 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-27 22:08:40.683389 | orchestrator | Saturday 27 September 2025 22:06:34 +0000 (0:00:02.520) 0:04:10.138 **** 2025-09-27 22:08:40.683395 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.683401 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.683407 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.683414 | orchestrator | 2025-09-27 22:08:40.683420 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-27 22:08:40.683426 | orchestrator | Saturday 27 September 2025 22:06:37 +0000 (0:00:03.001) 0:04:13.140 **** 2025-09-27 22:08:40.683436 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-27 22:08:40.683442 | orchestrator | 2025-09-27 22:08:40.683449 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-27 22:08:40.683458 | orchestrator | Saturday 27 September 2025 22:06:38 +0000 (0:00:01.410) 0:04:14.550 **** 2025-09-27 22:08:40.683465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 22:08:40.683479 | 
orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.683485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 22:08:40.683492 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.683498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 22:08:40.683505 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.683511 | orchestrator | 2025-09-27 22:08:40.683517 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-27 22:08:40.683524 | orchestrator | Saturday 27 September 2025 22:06:39 +0000 (0:00:01.278) 0:04:15.829 **** 2025-09-27 22:08:40.683530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 22:08:40.683537 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.683543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 22:08:40.683550 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.683556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 22:08:40.683563 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.683569 | orchestrator | 2025-09-27 22:08:40.683575 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-27 22:08:40.683581 | orchestrator | Saturday 27 September 2025 22:06:41 +0000 (0:00:01.309) 0:04:17.138 **** 2025-09-27 22:08:40.683587 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.683599 | orchestrator | skipping: [testbed-node-1] 
2025-09-27 22:08:40.683605 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.683611 | orchestrator | 2025-09-27 22:08:40.683621 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-27 22:08:40.683631 | orchestrator | Saturday 27 September 2025 22:06:42 +0000 (0:00:01.757) 0:04:18.896 **** 2025-09-27 22:08:40.683637 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.683644 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.683650 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.683657 | orchestrator | 2025-09-27 22:08:40.683663 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-27 22:08:40.683669 | orchestrator | Saturday 27 September 2025 22:06:45 +0000 (0:00:02.319) 0:04:21.216 **** 2025-09-27 22:08:40.683675 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.683682 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.683688 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.683694 | orchestrator | 2025-09-27 22:08:40.683700 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-27 22:08:40.683707 | orchestrator | Saturday 27 September 2025 22:06:48 +0000 (0:00:02.987) 0:04:24.203 **** 2025-09-27 22:08:40.683713 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-27 22:08:40.683719 | orchestrator | 2025-09-27 22:08:40.683725 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-27 22:08:40.683732 | orchestrator | Saturday 27 September 2025 22:06:49 +0000 (0:00:00.857) 0:04:25.061 **** 2025-09-27 22:08:40.683738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 22:08:40.683745 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.683751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 22:08:40.683757 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.683764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 22:08:40.683770 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.683777 | orchestrator | 2025-09-27 22:08:40.683783 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-27 22:08:40.683789 | orchestrator | Saturday 27 
September 2025 22:06:50 +0000 (0:00:01.326) 0:04:26.387 **** 2025-09-27 22:08:40.683796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 22:08:40.683807 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.683813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 22:08:40.683820 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.683834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 22:08:40.683840 | orchestrator | skipping: [testbed-node-2] 2025-09-27 
22:08:40.683847 | orchestrator | 2025-09-27 22:08:40.683853 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-27 22:08:40.683859 | orchestrator | Saturday 27 September 2025 22:06:51 +0000 (0:00:01.379) 0:04:27.767 **** 2025-09-27 22:08:40.683865 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.683871 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.683878 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.683884 | orchestrator | 2025-09-27 22:08:40.683890 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-27 22:08:40.683896 | orchestrator | Saturday 27 September 2025 22:06:53 +0000 (0:00:01.583) 0:04:29.350 **** 2025-09-27 22:08:40.683902 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.683909 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.683915 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.683921 | orchestrator | 2025-09-27 22:08:40.683927 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-27 22:08:40.683933 | orchestrator | Saturday 27 September 2025 22:06:55 +0000 (0:00:02.282) 0:04:31.632 **** 2025-09-27 22:08:40.683940 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.683946 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.683952 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.683958 | orchestrator | 2025-09-27 22:08:40.683964 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-27 22:08:40.683971 | orchestrator | Saturday 27 September 2025 22:06:58 +0000 (0:00:02.890) 0:04:34.523 **** 2025-09-27 22:08:40.683977 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.683983 | orchestrator | 2025-09-27 22:08:40.683989 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] 
******************** 2025-09-27 22:08:40.683996 | orchestrator | Saturday 27 September 2025 22:07:00 +0000 (0:00:01.524) 0:04:36.048 **** 2025-09-27 22:08:40.684002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.684013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 22:08:40.684020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.684048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.684054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 22:08:40.684066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.684095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.684102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 22:08:40.684108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.684132 | orchestrator | 2025-09-27 22:08:40.684138 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-27 22:08:40.684144 | orchestrator | Saturday 27 September 2025 22:07:03 +0000 (0:00:03.395) 0:04:39.444 **** 2025-09-27 22:08:40.684158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.684165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 22:08:40.684172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.684209 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.684216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.684227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 22:08:40.684236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.684261 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.684267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.684274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 22:08:40.684280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 
22:08:40.684294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 22:08:40.684302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:08:40.684308 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.684314 | orchestrator | 2025-09-27 22:08:40.684321 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-27 22:08:40.684331 | orchestrator | Saturday 27 September 2025 22:07:04 +0000 (0:00:00.702) 0:04:40.146 **** 2025-09-27 22:08:40.684338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 22:08:40.684344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 22:08:40.684351 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.684357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 22:08:40.684363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 22:08:40.684370 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.684376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 22:08:40.684382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 22:08:40.684388 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.684395 | orchestrator | 2025-09-27 22:08:40.684401 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-27 22:08:40.684407 | orchestrator | Saturday 27 September 2025 22:07:05 +0000 (0:00:01.476) 0:04:41.623 **** 2025-09-27 22:08:40.684414 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.684420 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.684426 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.684432 | orchestrator | 2025-09-27 22:08:40.684439 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules 
config] ************ 2025-09-27 22:08:40.684445 | orchestrator | Saturday 27 September 2025 22:07:07 +0000 (0:00:01.383) 0:04:43.007 **** 2025-09-27 22:08:40.684451 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.684457 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.684463 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.684469 | orchestrator | 2025-09-27 22:08:40.684476 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-27 22:08:40.684482 | orchestrator | Saturday 27 September 2025 22:07:09 +0000 (0:00:02.115) 0:04:45.122 **** 2025-09-27 22:08:40.684488 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.684494 | orchestrator | 2025-09-27 22:08:40.684500 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-27 22:08:40.684506 | orchestrator | Saturday 27 September 2025 22:07:10 +0000 (0:00:01.371) 0:04:46.494 **** 2025-09-27 22:08:40.684522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:08:40.684534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:08:40.684540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:08:40.684548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:08:40.684632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:08:40.684645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:08:40.684656 | orchestrator | 2025-09-27 22:08:40.684663 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-27 22:08:40.684669 | orchestrator | Saturday 27 September 2025 22:07:15 +0000 (0:00:05.117) 0:04:51.611 **** 2025-09-27 22:08:40.684676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 22:08:40.684683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 22:08:40.684689 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.684696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 22:08:40.684730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 22:08:40.684739 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.684746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 22:08:40.684753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 22:08:40.684760 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.684766 | orchestrator | 2025-09-27 22:08:40.684773 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-27 22:08:40.684779 | orchestrator | Saturday 27 September 2025 22:07:16 +0000 (0:00:00.677) 0:04:52.288 **** 2025-09-27 22:08:40.684785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-27 22:08:40.684792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 22:08:40.684799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 22:08:40.684810 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.684817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-27 22:08:40.684846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 22:08:40.684855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 22:08:40.684861 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.684867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-27 22:08:40.684874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 22:08:40.684880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 22:08:40.684886 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.684893 | orchestrator | 2025-09-27 22:08:40.684899 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-27 22:08:40.684905 | orchestrator | Saturday 27 September 2025 22:07:17 +0000 (0:00:00.911) 0:04:53.200 **** 2025-09-27 22:08:40.684912 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.684918 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.684924 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.684930 | orchestrator | 2025-09-27 22:08:40.684936 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-27 22:08:40.684942 | orchestrator | Saturday 27 September 2025 22:07:18 +0000 (0:00:00.843) 0:04:54.043 **** 2025-09-27 22:08:40.684949 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.684955 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.684962 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.684968 | orchestrator | 2025-09-27 22:08:40.684974 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-27 22:08:40.684981 | orchestrator | Saturday 27 September 2025 22:07:19 +0000 (0:00:01.343) 0:04:55.387 **** 2025-09-27 22:08:40.684987 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.684994 | orchestrator | 2025-09-27 22:08:40.685000 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-27 22:08:40.685007 | orchestrator | Saturday 27 September 2025 22:07:20 +0000 (0:00:01.410) 0:04:56.798 **** 2025-09-27 22:08:40.685014 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 22:08:40.685025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 22:08:40.685031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 22:08:40.685082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 22:08:40.685088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685135 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 22:08:40.685143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 22:08:40.685150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 22:08:40.685204 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 22:08:40.685212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685225 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 22:08:40.685243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 22:08:40.685253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-09-27 22:08:40.685276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 22:08:40.685283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 22:08:40.685293 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685320 | orchestrator | 2025-09-27 22:08:40.685327 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-27 22:08:40.685333 | orchestrator | Saturday 27 September 2025 22:07:25 +0000 (0:00:04.727) 0:05:01.526 **** 2025-09-27 22:08:40.685340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-27 22:08:40.685347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 22:08:40.685353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-27 22:08:40.685391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 22:08:40.685398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-27 22:08:40.685421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685427 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.685434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 22:08:40.685446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-27 22:08:40.685479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 22:08:40.685486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-27 22:08:40.685498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 22:08:40.685512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685535 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.685541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-27 22:08:40.685604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 22:08:40.685622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:08:40.685635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 22:08:40.685641 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.685648 | orchestrator | 2025-09-27 22:08:40.685654 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-27 22:08:40.685661 | orchestrator | Saturday 27 September 2025 22:07:26 +0000 (0:00:01.198) 0:05:02.724 **** 
2025-09-27 22:08:40.685667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-27 22:08:40.685674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-27 22:08:40.685680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 22:08:40.685687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-27 22:08:40.685693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 22:08:40.685700 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.685714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-27 22:08:40.685721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 22:08:40.685727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 22:08:40.685741 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.685747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-27 22:08:40.685753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-27 22:08:40.685760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 22:08:40.685766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 22:08:40.685772 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.685782 | orchestrator | 2025-09-27 22:08:40.685788 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-27 22:08:40.685795 | orchestrator | Saturday 27 September 2025 
22:07:27 +0000 (0:00:00.984) 0:05:03.709 **** 2025-09-27 22:08:40.685801 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.685807 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.685813 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.685819 | orchestrator | 2025-09-27 22:08:40.685825 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-27 22:08:40.685831 | orchestrator | Saturday 27 September 2025 22:07:28 +0000 (0:00:00.436) 0:05:04.146 **** 2025-09-27 22:08:40.685837 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.685844 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.685850 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.685856 | orchestrator | 2025-09-27 22:08:40.685862 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-27 22:08:40.685868 | orchestrator | Saturday 27 September 2025 22:07:29 +0000 (0:00:01.410) 0:05:05.556 **** 2025-09-27 22:08:40.685874 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.685880 | orchestrator | 2025-09-27 22:08:40.685886 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-27 22:08:40.685893 | orchestrator | Saturday 27 September 2025 22:07:31 +0000 (0:00:01.687) 0:05:07.243 **** 2025-09-27 22:08:40.685899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 22:08:40.685913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 22:08:40.685926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 22:08:40.685933 | orchestrator | 2025-09-27 22:08:40.685939 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-27 22:08:40.685945 | orchestrator | Saturday 27 September 2025 22:07:33 +0000 (0:00:02.555) 0:05:09.799 **** 2025-09-27 22:08:40.685952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-27 22:08:40.685959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-27 22:08:40.685971 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.685977 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.685990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-27 
22:08:40.685997 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686003 | orchestrator | 2025-09-27 22:08:40.686010 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-27 22:08:40.686035 | orchestrator | Saturday 27 September 2025 22:07:34 +0000 (0:00:00.398) 0:05:10.198 **** 2025-09-27 22:08:40.686042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-27 22:08:40.686049 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-27 22:08:40.686062 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.686068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-27 22:08:40.686075 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686081 | orchestrator | 2025-09-27 22:08:40.686087 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-27 22:08:40.686093 | orchestrator | Saturday 27 September 2025 22:07:35 +0000 (0:00:00.958) 0:05:11.156 **** 2025-09-27 22:08:40.686099 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686106 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.686112 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686118 | orchestrator | 2025-09-27 22:08:40.686124 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-27 22:08:40.686130 | orchestrator | Saturday 27 September 2025 22:07:35 +0000 (0:00:00.462) 0:05:11.619 **** 2025-09-27 22:08:40.686136 | orchestrator | 
skipping: [testbed-node-0] 2025-09-27 22:08:40.686142 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.686149 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686154 | orchestrator | 2025-09-27 22:08:40.686161 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-27 22:08:40.686167 | orchestrator | Saturday 27 September 2025 22:07:37 +0000 (0:00:01.326) 0:05:12.945 **** 2025-09-27 22:08:40.686173 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:08:40.686179 | orchestrator | 2025-09-27 22:08:40.686185 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-27 22:08:40.686231 | orchestrator | Saturday 27 September 2025 22:07:38 +0000 (0:00:01.755) 0:05:14.701 **** 2025-09-27 22:08:40.686238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.686260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.686267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.686274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.686282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.686295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-27 22:08:40.686301 | orchestrator | 2025-09-27 22:08:40.686311 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-27 22:08:40.686317 | orchestrator | Saturday 27 September 2025 22:07:44 +0000 (0:00:06.220) 0:05:20.922 **** 2025-09-27 22:08:40.686327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.686334 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.686341 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.686358 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.686365 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.686378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.686385 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-27 22:08:40.686392 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686398 | orchestrator | 2025-09-27 22:08:40.686404 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-27 22:08:40.686410 | orchestrator | Saturday 27 September 2025 22:07:45 +0000 (0:00:00.666) 0:05:21.589 **** 2025-09-27 22:08:40.686417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}})  2025-09-27 22:08:40.686443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686449 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686481 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.686488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  
2025-09-27 22:08:40.686507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 22:08:40.686521 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686527 | orchestrator | 2025-09-27 22:08:40.686534 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-27 22:08:40.686540 | orchestrator | Saturday 27 September 2025 22:07:47 +0000 (0:00:01.661) 0:05:23.251 **** 2025-09-27 22:08:40.686546 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.686552 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.686558 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.686564 | orchestrator | 2025-09-27 22:08:40.686570 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-27 22:08:40.686577 | orchestrator | Saturday 27 September 2025 22:07:48 +0000 (0:00:01.379) 0:05:24.631 **** 2025-09-27 22:08:40.686583 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.686589 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.686595 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.686601 | orchestrator | 2025-09-27 22:08:40.686608 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-27 22:08:40.686614 | orchestrator | Saturday 27 September 2025 22:07:50 +0000 (0:00:02.200) 0:05:26.831 **** 2025-09-27 22:08:40.686624 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686631 | orchestrator | skipping: 
[testbed-node-1] 2025-09-27 22:08:40.686637 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686643 | orchestrator | 2025-09-27 22:08:40.686649 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-27 22:08:40.686655 | orchestrator | Saturday 27 September 2025 22:07:51 +0000 (0:00:00.349) 0:05:27.181 **** 2025-09-27 22:08:40.686661 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686667 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.686674 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686680 | orchestrator | 2025-09-27 22:08:40.686686 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-27 22:08:40.686692 | orchestrator | Saturday 27 September 2025 22:07:51 +0000 (0:00:00.329) 0:05:27.510 **** 2025-09-27 22:08:40.686698 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686705 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.686711 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686717 | orchestrator | 2025-09-27 22:08:40.686723 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-27 22:08:40.686729 | orchestrator | Saturday 27 September 2025 22:07:52 +0000 (0:00:00.649) 0:05:28.160 **** 2025-09-27 22:08:40.686736 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686742 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.686749 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686755 | orchestrator | 2025-09-27 22:08:40.686761 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-27 22:08:40.686768 | orchestrator | Saturday 27 September 2025 22:07:52 +0000 (0:00:00.323) 0:05:28.483 **** 2025-09-27 22:08:40.686774 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686780 | orchestrator | skipping: 
[testbed-node-1] 2025-09-27 22:08:40.686786 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686792 | orchestrator | 2025-09-27 22:08:40.686797 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-27 22:08:40.686803 | orchestrator | Saturday 27 September 2025 22:07:52 +0000 (0:00:00.323) 0:05:28.807 **** 2025-09-27 22:08:40.686808 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.686813 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.686819 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.686824 | orchestrator | 2025-09-27 22:08:40.686829 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-27 22:08:40.686835 | orchestrator | Saturday 27 September 2025 22:07:53 +0000 (0:00:00.850) 0:05:29.658 **** 2025-09-27 22:08:40.686840 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.686846 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.686851 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.686857 | orchestrator | 2025-09-27 22:08:40.686862 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-27 22:08:40.686868 | orchestrator | Saturday 27 September 2025 22:07:54 +0000 (0:00:00.698) 0:05:30.357 **** 2025-09-27 22:08:40.686873 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.686878 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.686884 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.686889 | orchestrator | 2025-09-27 22:08:40.686895 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-27 22:08:40.686900 | orchestrator | Saturday 27 September 2025 22:07:54 +0000 (0:00:00.480) 0:05:30.837 **** 2025-09-27 22:08:40.686905 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.686911 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.686917 | 
orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.686922 | orchestrator | 2025-09-27 22:08:40.686927 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-27 22:08:40.686933 | orchestrator | Saturday 27 September 2025 22:07:55 +0000 (0:00:00.915) 0:05:31.753 **** 2025-09-27 22:08:40.686938 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.686944 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.686954 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.686959 | orchestrator | 2025-09-27 22:08:40.686965 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-27 22:08:40.686970 | orchestrator | Saturday 27 September 2025 22:07:57 +0000 (0:00:01.290) 0:05:33.043 **** 2025-09-27 22:08:40.686975 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.686981 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.686989 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.686995 | orchestrator | 2025-09-27 22:08:40.687001 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-27 22:08:40.687010 | orchestrator | Saturday 27 September 2025 22:07:58 +0000 (0:00:00.914) 0:05:33.958 **** 2025-09-27 22:08:40.687016 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.687021 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.687027 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.687032 | orchestrator | 2025-09-27 22:08:40.687038 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-27 22:08:40.687043 | orchestrator | Saturday 27 September 2025 22:08:08 +0000 (0:00:10.198) 0:05:44.157 **** 2025-09-27 22:08:40.687049 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.687054 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.687059 | orchestrator | ok: [testbed-node-2] 2025-09-27 
22:08:40.687065 | orchestrator | 2025-09-27 22:08:40.687070 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-27 22:08:40.687076 | orchestrator | Saturday 27 September 2025 22:08:08 +0000 (0:00:00.743) 0:05:44.900 **** 2025-09-27 22:08:40.687081 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.687086 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.687092 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.687097 | orchestrator | 2025-09-27 22:08:40.687103 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-27 22:08:40.687108 | orchestrator | Saturday 27 September 2025 22:08:22 +0000 (0:00:13.682) 0:05:58.583 **** 2025-09-27 22:08:40.687114 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.687119 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.687124 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.687130 | orchestrator | 2025-09-27 22:08:40.687136 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-27 22:08:40.687141 | orchestrator | Saturday 27 September 2025 22:08:23 +0000 (0:00:01.135) 0:05:59.719 **** 2025-09-27 22:08:40.687147 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:08:40.687152 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:08:40.687158 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:08:40.687163 | orchestrator | 2025-09-27 22:08:40.687169 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-27 22:08:40.687174 | orchestrator | Saturday 27 September 2025 22:08:33 +0000 (0:00:09.496) 0:06:09.215 **** 2025-09-27 22:08:40.687180 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.687185 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.687205 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.687210 | 
orchestrator | 2025-09-27 22:08:40.687216 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-27 22:08:40.687221 | orchestrator | Saturday 27 September 2025 22:08:33 +0000 (0:00:00.353) 0:06:09.569 **** 2025-09-27 22:08:40.687227 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.687232 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.687238 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.687243 | orchestrator | 2025-09-27 22:08:40.687248 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-27 22:08:40.687254 | orchestrator | Saturday 27 September 2025 22:08:33 +0000 (0:00:00.352) 0:06:09.921 **** 2025-09-27 22:08:40.687260 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.687265 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.687270 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.687276 | orchestrator | 2025-09-27 22:08:40.687286 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-27 22:08:40.687292 | orchestrator | Saturday 27 September 2025 22:08:34 +0000 (0:00:00.679) 0:06:10.600 **** 2025-09-27 22:08:40.687297 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.687303 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.687308 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.687313 | orchestrator | 2025-09-27 22:08:40.687319 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-27 22:08:40.687324 | orchestrator | Saturday 27 September 2025 22:08:35 +0000 (0:00:00.336) 0:06:10.937 **** 2025-09-27 22:08:40.687330 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.687335 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.687340 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.687346 | 
orchestrator | 2025-09-27 22:08:40.687351 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-27 22:08:40.687356 | orchestrator | Saturday 27 September 2025 22:08:35 +0000 (0:00:00.407) 0:06:11.344 **** 2025-09-27 22:08:40.687362 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:08:40.687367 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:08:40.687373 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:08:40.687378 | orchestrator | 2025-09-27 22:08:40.687383 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-27 22:08:40.687389 | orchestrator | Saturday 27 September 2025 22:08:35 +0000 (0:00:00.344) 0:06:11.689 **** 2025-09-27 22:08:40.687394 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.687400 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.687405 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.687410 | orchestrator | 2025-09-27 22:08:40.687416 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-27 22:08:40.687421 | orchestrator | Saturday 27 September 2025 22:08:37 +0000 (0:00:01.343) 0:06:13.032 **** 2025-09-27 22:08:40.687427 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:08:40.687432 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:08:40.687438 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:08:40.687443 | orchestrator | 2025-09-27 22:08:40.687449 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:08:40.687454 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-27 22:08:40.687461 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-27 22:08:40.687466 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 
ignored=0 2025-09-27 22:08:40.687471 | orchestrator | 2025-09-27 22:08:40.687477 | orchestrator | 2025-09-27 22:08:40.687485 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:08:40.687494 | orchestrator | Saturday 27 September 2025 22:08:37 +0000 (0:00:00.885) 0:06:13.918 **** 2025-09-27 22:08:40.687500 | orchestrator | =============================================================================== 2025-09-27 22:08:40.687505 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.68s 2025-09-27 22:08:40.687511 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.20s 2025-09-27 22:08:40.687516 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.50s 2025-09-27 22:08:40.687521 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.22s 2025-09-27 22:08:40.687527 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.77s 2025-09-27 22:08:40.687532 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.12s 2025-09-27 22:08:40.687537 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.97s 2025-09-27 22:08:40.687543 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.73s 2025-09-27 22:08:40.687552 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.57s 2025-09-27 22:08:40.687557 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.41s 2025-09-27 22:08:40.687563 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.17s 2025-09-27 22:08:40.687568 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.17s 2025-09-27 22:08:40.687574 | orchestrator | 
haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.09s 2025-09-27 22:08:40.687579 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.93s 2025-09-27 22:08:40.687585 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.89s 2025-09-27 22:08:40.687590 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 3.69s 2025-09-27 22:08:40.687596 | orchestrator | loadbalancer : Check loadbalancer containers ---------------------------- 3.68s 2025-09-27 22:08:40.687602 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.66s 2025-09-27 22:08:40.687607 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.59s 2025-09-27 22:08:40.687613 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.46s 2025-09-27 22:08:40.687618 | orchestrator | 2025-09-27 22:08:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:08:43.718404 | orchestrator | 2025-09-27 22:08:43 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:08:43.720360 | orchestrator | 2025-09-27 22:08:43 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:08:43.722832 | orchestrator | 2025-09-27 22:08:43 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:08:43.723407 | orchestrator | 2025-09-27 22:08:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:08:46.768038 | orchestrator | 2025-09-27 22:08:46 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:08:46.768592 | orchestrator | 2025-09-27 22:08:46 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:08:46.769509 | orchestrator | 2025-09-27 22:08:46 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 
22:08:46.769548 | orchestrator | 2025-09-27 22:08:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:08:49.796836 | orchestrator | 2025-09-27 22:08:49 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:08:49.797168 | orchestrator | 2025-09-27 22:08:49 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:08:49.799245 | orchestrator | 2025-09-27 22:08:49 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:08:49.799270 | orchestrator | 2025-09-27 22:08:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:08:52.829670 | orchestrator | 2025-09-27 22:08:52 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:08:52.831002 | orchestrator | 2025-09-27 22:08:52 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:08:52.832363 | orchestrator | 2025-09-27 22:08:52 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:08:52.832402 | orchestrator | 2025-09-27 22:08:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:08:55.905894 | orchestrator | 2025-09-27 22:08:55 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:08:55.907771 | orchestrator | 2025-09-27 22:08:55 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:08:55.910478 | orchestrator | 2025-09-27 22:08:55 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED 2025-09-27 22:08:55.910542 | orchestrator | 2025-09-27 22:08:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:08:58.943167 | orchestrator | 2025-09-27 22:08:58 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:08:58.947665 | orchestrator | 2025-09-27 22:08:58 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:08:58.947825 | orchestrator | 2025-09-27 22:08:58 | 
INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:08:58.947839 | orchestrator | 2025-09-27 22:08:58 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:01.985984 | orchestrator | 2025-09-27 22:09:01 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:01.987830 | orchestrator | 2025-09-27 22:09:01 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:01.988518 | orchestrator | 2025-09-27 22:09:01 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:01.988847 | orchestrator | 2025-09-27 22:09:01 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:05.088665 | orchestrator | 2025-09-27 22:09:05 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:05.088810 | orchestrator | 2025-09-27 22:09:05 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:05.089404 | orchestrator | 2025-09-27 22:09:05 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:05.089461 | orchestrator | 2025-09-27 22:09:05 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:08.116113 | orchestrator | 2025-09-27 22:09:08 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:08.118331 | orchestrator | 2025-09-27 22:09:08 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:08.120235 | orchestrator | 2025-09-27 22:09:08 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:08.120423 | orchestrator | 2025-09-27 22:09:08 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:11.157943 | orchestrator | 2025-09-27 22:09:11 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:11.159374 | orchestrator | 2025-09-27 22:09:11 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:11.165612 | orchestrator | 2025-09-27 22:09:11 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:11.165677 | orchestrator | 2025-09-27 22:09:11 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:14.212032 | orchestrator | 2025-09-27 22:09:14 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:14.215058 | orchestrator | 2025-09-27 22:09:14 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:14.217477 | orchestrator | 2025-09-27 22:09:14 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:14.218465 | orchestrator | 2025-09-27 22:09:14 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:17.266106 | orchestrator | 2025-09-27 22:09:17 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:17.267191 | orchestrator | 2025-09-27 22:09:17 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:17.267865 | orchestrator | 2025-09-27 22:09:17 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:17.268271 | orchestrator | 2025-09-27 22:09:17 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:20.319178 | orchestrator | 2025-09-27 22:09:20 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:20.321269 | orchestrator | 2025-09-27 22:09:20 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:20.323492 | orchestrator | 2025-09-27 22:09:20 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:20.323551 | orchestrator | 2025-09-27 22:09:20 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:23.368354 | orchestrator | 2025-09-27 22:09:23 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:23.368964 | orchestrator | 2025-09-27 22:09:23 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:23.370368 | orchestrator | 2025-09-27 22:09:23 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:23.370410 | orchestrator | 2025-09-27 22:09:23 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:26.411025 | orchestrator | 2025-09-27 22:09:26 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:26.412650 | orchestrator | 2025-09-27 22:09:26 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:26.414228 | orchestrator | 2025-09-27 22:09:26 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:26.414327 | orchestrator | 2025-09-27 22:09:26 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:29.458843 | orchestrator | 2025-09-27 22:09:29 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:29.458977 | orchestrator | 2025-09-27 22:09:29 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:29.463631 | orchestrator | 2025-09-27 22:09:29 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:29.463692 | orchestrator | 2025-09-27 22:09:29 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:32.522789 | orchestrator | 2025-09-27 22:09:32 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:32.524644 | orchestrator | 2025-09-27 22:09:32 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:32.526902 | orchestrator | 2025-09-27 22:09:32 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:32.526982 | orchestrator | 2025-09-27 22:09:32 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:35.579636 | orchestrator | 2025-09-27 22:09:35 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:35.581582 | orchestrator | 2025-09-27 22:09:35 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:35.583509 | orchestrator | 2025-09-27 22:09:35 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:35.583575 | orchestrator | 2025-09-27 22:09:35 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:38.623075 | orchestrator | 2025-09-27 22:09:38 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:38.624293 | orchestrator | 2025-09-27 22:09:38 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:38.625576 | orchestrator | 2025-09-27 22:09:38 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:38.625619 | orchestrator | 2025-09-27 22:09:38 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:41.669799 | orchestrator | 2025-09-27 22:09:41 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:41.670744 | orchestrator | 2025-09-27 22:09:41 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:41.672453 | orchestrator | 2025-09-27 22:09:41 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:41.672485 | orchestrator | 2025-09-27 22:09:41 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:44.721902 | orchestrator | 2025-09-27 22:09:44 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:44.724620 | orchestrator | 2025-09-27 22:09:44 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:44.727445 | orchestrator | 2025-09-27 22:09:44 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:44.727486 | orchestrator | 2025-09-27 22:09:44 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:47.775806 | orchestrator | 2025-09-27 22:09:47 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:47.777214 | orchestrator | 2025-09-27 22:09:47 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:47.779233 | orchestrator | 2025-09-27 22:09:47 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:47.779288 | orchestrator | 2025-09-27 22:09:47 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:50.825648 | orchestrator | 2025-09-27 22:09:50 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:50.826674 | orchestrator | 2025-09-27 22:09:50 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:50.828536 | orchestrator | 2025-09-27 22:09:50 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:50.828733 | orchestrator | 2025-09-27 22:09:50 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:53.874274 | orchestrator | 2025-09-27 22:09:53 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:53.875810 | orchestrator | 2025-09-27 22:09:53 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:53.877773 | orchestrator | 2025-09-27 22:09:53 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:53.877970 | orchestrator | 2025-09-27 22:09:53 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:56.913228 | orchestrator | 2025-09-27 22:09:56 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:56.914210 | orchestrator | 2025-09-27 22:09:56 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:56.916030 | orchestrator | 2025-09-27 22:09:56 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:56.916347 | orchestrator | 2025-09-27 22:09:56 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:09:59.957487 | orchestrator | 2025-09-27 22:09:59 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:09:59.959226 | orchestrator | 2025-09-27 22:09:59 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:09:59.961959 | orchestrator | 2025-09-27 22:09:59 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:09:59.962251 | orchestrator | 2025-09-27 22:09:59 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:10:03.009604 | orchestrator | 2025-09-27 22:10:03 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:10:03.012989 | orchestrator | 2025-09-27 22:10:03 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:10:03.015089 | orchestrator | 2025-09-27 22:10:03 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:10:03.015610 | orchestrator | 2025-09-27 22:10:03 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:10:06.054528 | orchestrator | 2025-09-27 22:10:06 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:10:06.056572 | orchestrator | 2025-09-27 22:10:06 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:10:06.057821 | orchestrator | 2025-09-27 22:10:06 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:10:06.058375 | orchestrator | 2025-09-27 22:10:06 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:10:09.112641 | orchestrator | 2025-09-27 22:10:09 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:10:09.114829 | orchestrator | 2025-09-27 22:10:09 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:10:09.118821 | orchestrator | 2025-09-27 22:10:09 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:10:09.118884 | orchestrator | 2025-09-27 22:10:09 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:10:12.162715 | orchestrator | 2025-09-27 22:10:12 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:10:12.168483 | orchestrator | 2025-09-27 22:10:12 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:10:12.172689 | orchestrator | 2025-09-27 22:10:12 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:10:12.172791 | orchestrator | 2025-09-27 22:10:12 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:10:15.229521 | orchestrator | 2025-09-27 22:10:15 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:10:15.230160 | orchestrator | 2025-09-27 22:10:15 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:10:15.231669 | orchestrator | 2025-09-27 22:10:15 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:10:15.231884 | orchestrator | 2025-09-27 22:10:15 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:10:18.277627 | orchestrator | 2025-09-27 22:10:18 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:10:18.278352 | orchestrator | 2025-09-27 22:10:18 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:10:18.280153 | orchestrator | 2025-09-27 22:10:18 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:10:18.280201 | orchestrator | 2025-09-27 22:10:18 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:10:21.333375 | orchestrator | 2025-09-27 22:10:21 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:10:21.334641 | orchestrator | 2025-09-27 22:10:21 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:10:21.337231 | orchestrator | 2025-09-27 22:10:21 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:10:21.337313 | orchestrator | 2025-09-27 22:10:21 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:10:24.383919 | orchestrator | 2025-09-27 22:10:24 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:10:24.389665 | orchestrator | 2025-09-27 22:10:24 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:10:24.392802 | orchestrator | 2025-09-27 22:10:24 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state STARTED
2025-09-27 22:10:24.392893 | orchestrator | 2025-09-27 22:10:24 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:10:27.444734 | orchestrator | 2025-09-27 22:10:27 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED
2025-09-27 22:10:27.448017 | orchestrator | 2025-09-27 22:10:27 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED
2025-09-27 22:10:27.457055 | orchestrator | 2025-09-27 22:10:27 | INFO  | Task 157b91f7-f555-4fbd-9bf6-2ac48e7d0d4d is in state SUCCESS
2025-09-27 22:10:27.460389 | orchestrator |
2025-09-27 22:10:27.460439 | orchestrator |
2025-09-27 22:10:27.460448 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-09-27 22:10:27.460455 | orchestrator |
2025-09-27 22:10:27.460461 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-27 22:10:27.460467 | orchestrator | Saturday 27 September 2025 22:00:07 +0000 (0:00:00.734) 0:00:00.734 ****
2025-09-27 22:10:27.460475 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.460482 | orchestrator |
2025-09-27 22:10:27.460488 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-27 22:10:27.460494 | orchestrator | Saturday 27 September 2025 22:00:08 +0000 (0:00:01.095) 0:00:01.830 ****
2025-09-27 22:10:27.460500 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.460507 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.460513 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.460519 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.460524 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.460530 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.460536 | orchestrator |
2025-09-27 22:10:27.460541 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-27 22:10:27.460547 | orchestrator | Saturday 27 September 2025 22:00:10 +0000 (0:00:01.484) 0:00:03.314 ****
2025-09-27 22:10:27.460553 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.460559 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.460565 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.460571 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.460576 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.460582 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.460588 | orchestrator |
2025-09-27 22:10:27.460594 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-27 22:10:27.460599 | orchestrator | Saturday 27 September 2025 22:00:11 +0000 (0:00:00.699) 0:00:04.013 ****
2025-09-27 22:10:27.460605 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.460611 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.460617 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.460622 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.460628 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.460633 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.460639 | orchestrator |
2025-09-27 22:10:27.460645 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-27 22:10:27.460650 | orchestrator | Saturday 27 September 2025 22:00:11 +0000 (0:00:00.972) 0:00:04.985 ****
2025-09-27 22:10:27.460656 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.460662 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.460667 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.460673 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.460679 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.460684 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.460690 | orchestrator |
2025-09-27 22:10:27.460759 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-27 22:10:27.460789 | orchestrator | Saturday 27 September 2025 22:00:12 +0000 (0:00:00.686) 0:00:05.672 ****
2025-09-27 22:10:27.460795 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.460801 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.460807 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.460812 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.460818 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.460823 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.460829 | orchestrator |
2025-09-27 22:10:27.460835 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-27 22:10:27.460841 | orchestrator | Saturday 27 September 2025 22:00:13 +0000 (0:00:00.551) 0:00:06.224 ****
2025-09-27 22:10:27.460847 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.460853 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.460858 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.460864 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.460870 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.460927 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.460933 | orchestrator |
2025-09-27 22:10:27.460941 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-27 22:10:27.460948 | orchestrator | Saturday 27 September 2025 22:00:14 +0000 (0:00:00.920) 0:00:07.145 ****
2025-09-27 22:10:27.460954 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.460962 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.460968 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.460986 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.460993 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.460999 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.461006 | orchestrator |
2025-09-27 22:10:27.461012 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-27 22:10:27.461019 | orchestrator | Saturday 27 September 2025 22:00:14 +0000 (0:00:00.743) 0:00:07.888 ****
2025-09-27 22:10:27.461025 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.461032 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.461038 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.461044 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.461050 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.461057 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.461063 | orchestrator |
2025-09-27 22:10:27.461070 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-27 22:10:27.461076 | orchestrator | Saturday 27 September 2025 22:00:15 +0000 (0:00:01.094) 0:00:08.982 ****
2025-09-27 22:10:27.461083 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 22:10:27.461089 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-27 22:10:27.461096 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-27 22:10:27.461102 | orchestrator |
2025-09-27 22:10:27.461109 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-27 22:10:27.461128 | orchestrator | Saturday 27 September 2025 22:00:16 +0000 (0:00:00.823) 0:00:09.806 ****
2025-09-27 22:10:27.461135 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.461142 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.461148 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.461154 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.461160 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.461167 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.461173 | orchestrator |
2025-09-27 22:10:27.461192 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-27 22:10:27.461199 | orchestrator | Saturday 27 September 2025 22:00:17 +0000 (0:00:01.141) 0:00:10.947 ****
2025-09-27 22:10:27.461205 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 22:10:27.461212 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-27 22:10:27.461218 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-27 22:10:27.461230 | orchestrator |
2025-09-27 22:10:27.461237 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-27 22:10:27.461243 | orchestrator | Saturday 27 September 2025 22:00:21 +0000 (0:00:03.111) 0:00:14.058 ****
2025-09-27 22:10:27.461250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 22:10:27.461257 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-27 22:10:27.461264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-27 22:10:27.461270 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461277 | orchestrator |
2025-09-27 22:10:27.461283 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-27 22:10:27.461290 | orchestrator | Saturday 27 September 2025 22:00:22 +0000 (0:00:01.076) 0:00:15.135 ****
2025-09-27 22:10:27.461298 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.461307 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.461314 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.461321 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461328 | orchestrator |
2025-09-27 22:10:27.461335 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-27 22:10:27.461341 | orchestrator | Saturday 27 September 2025 22:00:24 +0000 (0:00:02.299) 0:00:17.435 ****
2025-09-27 22:10:27.461350 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.461358 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.461369 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.461375 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461381 | orchestrator |
2025-09-27 22:10:27.461387 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-27 22:10:27.461393 | orchestrator | Saturday 27 September 2025 22:00:24 +0000 (0:00:00.151) 0:00:17.586 ****
2025-09-27 22:10:27.461401 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-27 22:00:18.568732', 'end': '2025-09-27 22:00:18.930234', 'delta': '0:00:00.361502', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.461419 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-27 22:00:19.360739', 'end': '2025-09-27 22:00:19.673496', 'delta': '0:00:00.312757', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.461492 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-27 22:00:20.528935', 'end': '2025-09-27 22:00:20.807910', 'delta': '0:00:00.278975', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.461500 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461506 | orchestrator |
2025-09-27 22:10:27.461512 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-27 22:10:27.461518 | orchestrator | Saturday 27 September 2025 22:00:25 +0000 (0:00:00.529) 0:00:18.116 ****
2025-09-27 22:10:27.461524 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.461529 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.461535 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.461541 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.461547 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.461560 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.461566 | orchestrator |
2025-09-27 22:10:27.461572 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-27 22:10:27.461578 | orchestrator | Saturday 27 September 2025 22:00:27 +0000 (0:00:02.692) 0:00:20.808 ****
2025-09-27 22:10:27.461583 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.461589 | orchestrator |
2025-09-27 22:10:27.461595 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-27 22:10:27.461601 | orchestrator | Saturday 27 September 2025 22:00:28 +0000 (0:00:00.615) 0:00:21.424 ****
2025-09-27 22:10:27.461606 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.461612 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461618 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.461623 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.461629 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.461635 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.461641 | orchestrator |
2025-09-27 22:10:27.461646 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-27 22:10:27.461652 | orchestrator | Saturday 27 September 2025 22:00:29 +0000 (0:00:01.365) 0:00:22.790 ****
2025-09-27 22:10:27.461675 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461682 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.461687 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.461693 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.461703 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.461716 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.461722 | orchestrator |
2025-09-27 22:10:27.461743 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-27 22:10:27.461749 | orchestrator | Saturday 27 September 2025 22:00:30 +0000 (0:00:00.881) 0:00:23.672 ****
2025-09-27 22:10:27.461755 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461768 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.461774 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.461780 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.461836 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.461842 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.461848 | orchestrator |
2025-09-27 22:10:27.461853 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-27 22:10:27.461859 | orchestrator | Saturday 27 September 2025 22:00:31 +0000 (0:00:00.727) 0:00:24.399 ****
2025-09-27 22:10:27.461865 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461871 | orchestrator |
2025-09-27 22:10:27.461876 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-27 22:10:27.461882 | orchestrator | Saturday 27 September 2025 22:00:31 +0000 (0:00:00.104) 0:00:24.504 ****
2025-09-27 22:10:27.461888 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461894 | orchestrator |
2025-09-27 22:10:27.461899 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-27 22:10:27.461905 | orchestrator | Saturday 27 September 2025 22:00:31 +0000 (0:00:00.205) 0:00:24.710 ****
2025-09-27 22:10:27.461911 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461917 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.461922 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.461928 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.461934 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.461940 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.461945 | orchestrator |
2025-09-27 22:10:27.461951 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-27 22:10:27.461961 | orchestrator | Saturday 27 September 2025 22:00:32 +0000 (0:00:00.905) 0:00:25.615 ****
2025-09-27 22:10:27.461967 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.461973 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.461979 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.461985 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.461991 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.461996 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.462002 | orchestrator |
2025-09-27 22:10:27.462008 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-27 22:10:27.462013 | orchestrator | Saturday 27 September 2025 22:00:33 +0000 (0:00:00.934) 0:00:26.550 ****
2025-09-27 22:10:27.462057 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.462063 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.462069 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.462074 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.462081 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.462086 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.462092 | orchestrator |
2025-09-27 22:10:27.462098 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-27 22:10:27.462104 | orchestrator | Saturday 27 September 2025 22:00:34 +0000 (0:00:00.657) 0:00:27.207 ****
2025-09-27 22:10:27.462121 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.462127 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.462132 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.462138 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.462143 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.462149 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.462155 | orchestrator |
2025-09-27 22:10:27.462160 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-27 22:10:27.462196 | orchestrator | Saturday 27 September 2025 22:00:34 +0000 (0:00:00.732) 0:00:27.940 ****
2025-09-27 22:10:27.462204 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.462209 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.462215 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.462221 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.462226 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.462232 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.462237 | orchestrator |
2025-09-27 22:10:27.462243 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-27 22:10:27.462249 | orchestrator | Saturday 27 September 2025 22:00:35 +0000 (0:00:00.613) 0:00:28.554 ****
2025-09-27 22:10:27.462255 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.462260 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.462266 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.462272 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.462277 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.462283 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.462289 | orchestrator |
2025-09-27 22:10:27.462294 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-27 22:10:27.462346 | orchestrator | Saturday 27 September 2025 22:00:36 +0000 (0:00:00.665) 0:00:29.219 ****
2025-09-27 22:10:27.462352 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.462358 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.462364 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.462369 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.462375 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.462381 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.462386 | orchestrator |
2025-09-27 22:10:27.462392 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-27 22:10:27.462398 | orchestrator | Saturday 27 September 2025 22:00:36 +0000 (0:00:00.536) 0:00:29.756 ****
2025-09-27 22:10:27.462404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-27 22:10:27.462414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-27 22:10:27.462421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9', 'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part1', 'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part14', 'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part15', 'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.462488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.462500 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.462506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462544 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part1', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part14', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part15', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part16', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.462604 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.462618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491-osd--block--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491', 'dm-uuid-LVM-wHBmOtcwELa8Z6sw5l1XCao88lHDe41j1vjTNJfV6eA0dA3MBFIkwsYpgurmYCLZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d8c80c3--887a--53bd--bc85--16ee8bc68188-osd--block--8d8c80c3--887a--53bd--bc85--16ee8bc68188', 'dm-uuid-LVM-Rha9tU5yk0hzlXIngRcjwIXqtvE0oXBJ2HuQnvy3j6JJ85lH4xUIdHk0YgdHfOlZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462676 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.462682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.462724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782', 'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part1', 'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part14', 'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part15', 'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be08f40e--52da--5801--960c--910a686d222b-osd--block--be08f40e--52da--5801--960c--910a686d222b', 'dm-uuid-LVM-wyBhBSAYl05TDUHUquGlYyz9dYJLjOi8A3BI4pNW5HEe1cGzhxxFEbLHgiOTasiV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 
'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2801305--6ac8--5a65--9707--7cc055d05458-osd--block--a2801305--6ac8--5a65--9707--7cc055d05458', 'dm-uuid-LVM-2HTfx83siLmEzeaVRGpOqAiM8WDfbFLGb2wYLKlcQirWYYyx1SkVf6WXy5MRnHLp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464371 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491-osd--block--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7yqG3J-jVdW-2Whz-Ntob-bZFp-BAn1-lVFRGJ', 'scsi-0QEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129', 'scsi-SQEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8d8c80c3--887a--53bd--bc85--16ee8bc68188-osd--block--8d8c80c3--887a--53bd--bc85--16ee8bc68188'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YQ81UZ-2mY5-F6Ev-t0Uq-ROnw-JQoC-30TuXd', 'scsi-0QEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d', 'scsi-SQEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9', 'scsi-SQEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-18-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464473 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.464484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464495 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.464505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464568 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--be08f40e--52da--5801--960c--910a686d222b-osd--block--be08f40e--52da--5801--960c--910a686d222b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EjMDk3-q4ZY-L2iE-GwkP-CDIm-bejd-C2yAUX', 'scsi-0QEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae', 'scsi-SQEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a2801305--6ac8--5a65--9707--7cc055d05458-osd--block--a2801305--6ac8--5a65--9707--7cc055d05458'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M8B58g-tRCs-RhC0-EWdg-esdz-7oMf-9To8tD', 'scsi-0QEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901', 'scsi-SQEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60', 'scsi-SQEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464617 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.464631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2625e84f--b704--594b--a79a--2de5db7d7d7c-osd--block--2625e84f--b704--594b--a79a--2de5db7d7d7c', 'dm-uuid-LVM-tEJP5PbcSsSSbbDKu3GExl301ZWn60CibG2ckcvFkNhCVDl7QfWW2UexMu9MJeZA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--30a62591--9a6e--5933--8bc7--7c2bee7235f5-osd--block--30a62591--9a6e--5933--8bc7--7c2bee7235f5', 'dm-uuid-LVM-nDSvOLBW0ZRMe4W3sP2G9mky0pBp7fYUb3CoXsrUNg876FlEU3xKbreGgmq0VpHD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:10:27.464771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part1', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part14', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part15', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part16', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2625e84f--b704--594b--a79a--2de5db7d7d7c-osd--block--2625e84f--b704--594b--a79a--2de5db7d7d7c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-im9I43-NQpD-NkwB-N1DN-gEtA-HpXU-rdcCzv', 'scsi-0QEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55', 'scsi-SQEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--30a62591--9a6e--5933--8bc7--7c2bee7235f5-osd--block--30a62591--9a6e--5933--8bc7--7c2bee7235f5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZrTht9-UIio-M6X3-0If5-OjbW-TIwq-RXDdvv', 'scsi-0QEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96', 'scsi-SQEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f', 'scsi-SQEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:10:27.464846 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.464857 | orchestrator | 2025-09-27 22:10:27.464869 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-09-27 22:10:27.464880 | orchestrator | Saturday 27 September 2025 22:00:38 +0000 (0:00:01.749) 0:00:31.505 **** 2025-09-27 22:10:27.464893 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.464904 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.464914 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.464930 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.464945 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.464955 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.464972 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.464983 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.464993 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.465008 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.465021 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.465032 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-27 22:10:27.465050 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9', 'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part1', 'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part14', 'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part15', 'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part16', 'scsi-SQEMU_QEMU_HARDDISK_2556ace2-5a48-42f4-80f3-8864b24f8ba9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 
'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465067 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465082 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465092 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465108 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465189 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465207 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part1', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part14', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part15', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part16', 'scsi-SQEMU_QEMU_HARDDISK_27aad776-148c-4565-8829-34bf45547489-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465234 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465251 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.465276 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465292 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465309 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465337 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465483 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465517 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465545 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465562 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465586 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782', 'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part1', 'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part14', 'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part15', 'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part16', 'scsi-SQEMU_QEMU_HARDDISK_2725e2ee-fa25-4636-a6d9-d82ade82b782-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465615 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465632 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.465660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491-osd--block--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491', 'dm-uuid-LVM-wHBmOtcwELa8Z6sw5l1XCao88lHDe41j1vjTNJfV6eA0dA3MBFIkwsYpgurmYCLZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465676 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d8c80c3--887a--53bd--bc85--16ee8bc68188-osd--block--8d8c80c3--887a--53bd--bc85--16ee8bc68188', 'dm-uuid-LVM-Rha9tU5yk0hzlXIngRcjwIXqtvE0oXBJ2HuQnvy3j6JJ85lH4xUIdHk0YgdHfOlZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465726 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465763 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465778 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465812 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491-osd--block--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7yqG3J-jVdW-2Whz-Ntob-bZFp-BAn1-lVFRGJ', 'scsi-0QEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129', 'scsi-SQEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465881 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.465895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8d8c80c3--887a--53bd--bc85--16ee8bc68188-osd--block--8d8c80c3--887a--53bd--bc85--16ee8bc68188'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YQ81UZ-2mY5-F6Ev-t0Uq-ROnw-JQoC-30TuXd', 'scsi-0QEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d', 'scsi-SQEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465915 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9', 'scsi-SQEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.465929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-18-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be08f40e--52da--5801--960c--910a686d222b-osd--block--be08f40e--52da--5801--960c--910a686d222b', 'dm-uuid-LVM-wyBhBSAYl05TDUHUquGlYyz9dYJLjOi8A3BI4pNW5HEe1cGzhxxFEbLHgiOTasiV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466549 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2801305--6ac8--5a65--9707--7cc055d05458-osd--block--a2801305--6ac8--5a65--9707--7cc055d05458', 'dm-uuid-LVM-2HTfx83siLmEzeaVRGpOqAiM8WDfbFLGb2wYLKlcQirWYYyx1SkVf6WXy5MRnHLp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466584 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466592 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466601 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.466618 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466641 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2625e84f--b704--594b--a79a--2de5db7d7d7c-osd--block--2625e84f--b704--594b--a79a--2de5db7d7d7c', 'dm-uuid-LVM-tEJP5PbcSsSSbbDKu3GExl301ZWn60CibG2ckcvFkNhCVDl7QfWW2UexMu9MJeZA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--30a62591--9a6e--5933--8bc7--7c2bee7235f5-osd--block--30a62591--9a6e--5933--8bc7--7c2bee7235f5', 'dm-uuid-LVM-nDSvOLBW0ZRMe4W3sP2G9mky0pBp7fYUb3CoXsrUNg876FlEU3xKbreGgmq0VpHD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466669 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466684 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466702 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466711 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466725 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': 
'80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.466741 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.466755 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.466763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--be08f40e--52da--5801--960c--910a686d222b-osd--block--be08f40e--52da--5801--960c--910a686d222b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EjMDk3-q4ZY-L2iE-GwkP-CDIm-bejd-C2yAUX', 'scsi-0QEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae', 'scsi-SQEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.466772 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.466784 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a2801305--6ac8--5a65--9707--7cc055d05458-osd--block--a2801305--6ac8--5a65--9707--7cc055d05458'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M8B58g-tRCs-RhC0-EWdg-esdz-7oMf-9To8tD', 'scsi-0QEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901', 'scsi-SQEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.466792 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.466812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60', 'scsi-SQEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.466821 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:10:27.466835 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part1', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part14', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part15', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part16', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-27 22:10:27.466850 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466868 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2625e84f--b704--594b--a79a--2de5db7d7d7c-osd--block--2625e84f--b704--594b--a79a--2de5db7d7d7c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-im9I43-NQpD-NkwB-N1DN-gEtA-HpXU-rdcCzv', 'scsi-0QEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55', 'scsi-SQEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466876 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.466884 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--30a62591--9a6e--5933--8bc7--7c2bee7235f5-osd--block--30a62591--9a6e--5933--8bc7--7c2bee7235f5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZrTht9-UIio-M6X3-0If5-OjbW-TIwq-RXDdvv', 'scsi-0QEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96', 'scsi-SQEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466897 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f', 'scsi-SQEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466905 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:10:27.466919 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.466927 | orchestrator |
2025-09-27 22:10:27.466935 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-27 22:10:27.466943 | orchestrator | Saturday 27 September 2025 22:00:39 +0000 (0:00:01.401) 0:00:32.907 ****
2025-09-27 22:10:27.466951 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.466960 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.466968 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.466980 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.466988 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.466996 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.467004 | orchestrator |
2025-09-27 22:10:27.467012 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-27 22:10:27.467020 | orchestrator | Saturday 27 September 2025 22:00:42 +0000 (0:00:02.334) 0:00:35.242 ****
2025-09-27 22:10:27.467027 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.467035 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.467043 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.467051 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.467064 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.467077 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.467090 | orchestrator |
2025-09-27 22:10:27.467105 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-27 22:10:27.467185 | orchestrator | Saturday 27 September 2025 22:00:42 +0000 (0:00:00.727) 0:00:35.969 ****
2025-09-27 22:10:27.467196 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.467205 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.467214 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.467224 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.467238 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.467251 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.467264 | orchestrator |
2025-09-27 22:10:27.467277 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-27 22:10:27.467290 | orchestrator | Saturday 27 September 2025 22:00:43 +0000 (0:00:00.951) 0:00:36.921 ****
2025-09-27 22:10:27.467302 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.467316 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.467328 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.467341 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.467354 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.467367 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.467379 | orchestrator |
2025-09-27 22:10:27.467391 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-27 22:10:27.467405 | orchestrator | Saturday 27 September 2025 22:00:44 +0000 (0:00:00.947) 0:00:37.868 ****
2025-09-27 22:10:27.467417 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.467430 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.467443 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.467455 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.467466 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.467477 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.467489 | orchestrator |
2025-09-27 22:10:27.467500 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-27 22:10:27.467513 | orchestrator | Saturday 27 September 2025 22:00:45 +0000 (0:00:00.753) 0:00:38.622 ****
2025-09-27 22:10:27.467526 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.467539 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.467551 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.467564 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.467576 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.467589 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.467601 | orchestrator |
2025-09-27 22:10:27.467614 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-27 22:10:27.467638 | orchestrator | Saturday 27 September 2025 22:00:46 +0000 (0:00:01.210) 0:00:39.833 ****
2025-09-27 22:10:27.467653 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 22:10:27.467666 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-27 22:10:27.467679 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-27 22:10:27.467691 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-27 22:10:27.467705 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-27 22:10:27.467716 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-27 22:10:27.467727 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-27 22:10:27.467737 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-27 22:10:27.467749 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-27 22:10:27.467760 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-27 22:10:27.467771 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-27 22:10:27.467789 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-27 22:10:27.467800 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-27 22:10:27.467812 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-27 22:10:27.467823 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-27 22:10:27.467834 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-27 22:10:27.467845 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-27 22:10:27.467857 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-27 22:10:27.467869 | orchestrator |
2025-09-27 22:10:27.467880 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-27 22:10:27.467888 | orchestrator | Saturday 27 September 2025 22:00:49 +0000 (0:00:03.015) 0:00:42.848 ****
2025-09-27 22:10:27.467895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 22:10:27.467902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-27 22:10:27.467909 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-27 22:10:27.467915 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.467922 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-27 22:10:27.467929 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-27 22:10:27.467935 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-27 22:10:27.467942 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.467949 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-27 22:10:27.467955 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-27 22:10:27.467962 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-27 22:10:27.467969 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.467984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-27 22:10:27.467991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-27 22:10:27.467997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-27 22:10:27.468004 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-27 22:10:27.468010 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-27 22:10:27.468017 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.468024 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-27 22:10:27.468030 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.468037 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-27 22:10:27.468044 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-27 22:10:27.468051 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-27 22:10:27.468057 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.468064 | orchestrator |
2025-09-27 22:10:27.468071 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-27 22:10:27.468085 | orchestrator | Saturday 27 September 2025 22:00:50 +0000 (0:00:01.008) 0:00:43.857 ****
2025-09-27 22:10:27.468092 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.468098 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.468105 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.468136 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.468144 | orchestrator |
2025-09-27 22:10:27.468151 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-27 22:10:27.468159 | orchestrator | Saturday 27 September 2025 22:00:52 +0000 (0:00:01.368) 0:00:45.226 ****
2025-09-27 22:10:27.468166 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.468172 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.468179 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.468188 | orchestrator |
2025-09-27 22:10:27.468197 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-27 22:10:27.468206 | orchestrator | Saturday 27 September 2025 22:00:52 +0000 (0:00:00.396) 0:00:45.623 ****
2025-09-27 22:10:27.468216 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.468226 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.468236 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.468247 | orchestrator |
2025-09-27 22:10:27.468258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-27 22:10:27.468269 | orchestrator | Saturday 27 September 2025 22:00:53 +0000 (0:00:00.435) 0:00:46.058 ****
2025-09-27 22:10:27.468280 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.468291 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.468302 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.468313 | orchestrator |
2025-09-27 22:10:27.468323 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-27 22:10:27.468334 | orchestrator | Saturday 27 September 2025 22:00:53 +0000 (0:00:00.502) 0:00:46.560 ****
2025-09-27 22:10:27.468345 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.468355 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.468365 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.468372 | orchestrator |
2025-09-27 22:10:27.468379 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-27 22:10:27.468386 | orchestrator | Saturday 27 September 2025 22:00:54 +0000 (0:00:00.676) 0:00:47.237 ****
2025-09-27 22:10:27.468392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:10:27.468399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-27 22:10:27.468406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-27 22:10:27.468412 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.468419 | orchestrator |
2025-09-27 22:10:27.468426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-27 22:10:27.468432 | orchestrator | Saturday 27 September 2025 22:00:54 +0000 (0:00:00.574) 0:00:47.811 ****
2025-09-27 22:10:27.468439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:10:27.468451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-27 22:10:27.468459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-27 22:10:27.468465 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.468472 | orchestrator |
2025-09-27 22:10:27.468479 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-27 22:10:27.468486 | orchestrator | Saturday 27 September 2025 22:00:55 +0000 (0:00:00.431) 0:00:48.243 ****
2025-09-27 22:10:27.468492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:10:27.468499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-27 22:10:27.468506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-27 22:10:27.468519 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.468525 | orchestrator |
2025-09-27 22:10:27.468532 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-27 22:10:27.468539 | orchestrator | Saturday 27 September 2025 22:00:55 +0000 (0:00:00.544) 0:00:48.788 ****
2025-09-27 22:10:27.468546 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.468553 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.468560 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.468566 | orchestrator |
2025-09-27 22:10:27.468573 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-27 22:10:27.468580 | orchestrator | Saturday 27 September 2025 22:00:56 +0000 (0:00:00.334) 0:00:49.122 ****
2025-09-27 22:10:27.468587 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-27 22:10:27.468593 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-27 22:10:27.468600 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-27 22:10:27.468607 | orchestrator |
2025-09-27 22:10:27.468613 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-27 22:10:27.468620 | orchestrator | Saturday 27 September 2025 22:00:56 +0000 (0:00:00.733) 0:00:49.856 ****
2025-09-27 22:10:27.468633 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 22:10:27.468640 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-27 22:10:27.468647 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-27 22:10:27.468654 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-27 22:10:27.468660 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-27 22:10:27.468667 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-27 22:10:27.468674 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-27 22:10:27.468681 | orchestrator |
2025-09-27 22:10:27.468687 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-27 22:10:27.468694 | orchestrator | Saturday 27 September 2025 22:00:57 +0000 (0:00:01.146) 0:00:51.003 ****
2025-09-27 22:10:27.468700 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 22:10:27.468707 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-27 22:10:27.468713 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-27 22:10:27.468720 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-27 22:10:27.468727 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-27 22:10:27.468733 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-27 22:10:27.468740 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-27 22:10:27.468746 | orchestrator |
2025-09-27 22:10:27.468753 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-27 22:10:27.468759 | orchestrator | Saturday 27 September 2025 22:00:59 +0000 (0:00:01.821) 0:00:52.825 ****
2025-09-27 22:10:27.468766 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.468775 | orchestrator |
2025-09-27 22:10:27.468781 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-27 22:10:27.468788 | orchestrator | Saturday 27 September 2025 22:01:00 +0000 (0:00:00.868) 0:00:53.693 ****
2025-09-27 22:10:27.468795 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.468802 | orchestrator |
2025-09-27 22:10:27.468808 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-27 22:10:27.468819 | orchestrator | Saturday 27 September 2025 22:01:01 +0000 (0:00:00.971) 0:00:54.665 ****
2025-09-27 22:10:27.468826 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.468833 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.468840 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.468847 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.468858 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.468869 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.468880 | orchestrator |
2025-09-27 22:10:27.468891 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-27 22:10:27.468902 | orchestrator | Saturday 27 September 2025 22:01:02 +0000 (0:00:01.070) 0:00:55.735 ****
2025-09-27 22:10:27.468913 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.468924 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.468935 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.468946 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.468958 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.468967 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.468974 | orchestrator |
2025-09-27 22:10:27.468993 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-27 22:10:27.469005 | orchestrator | Saturday 27 September 2025 22:01:04 +0000 (0:00:01.412) 0:00:57.148 ****
2025-09-27 22:10:27.469016 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.469026 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.469037 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.469049 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.469060 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.469071 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.469082 | orchestrator |
2025-09-27 22:10:27.469094 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-27 22:10:27.469101 | orchestrator | Saturday 27 September 2025 22:01:06 +0000 (0:00:01.244) 0:00:59.131 ****
2025-09-27 22:10:27.469108 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.469137 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.469143 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.469150 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.469157 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.469164 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.469170 | orchestrator |
2025-09-27 22:10:27.469177 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-27 22:10:27.469184 | orchestrator | Saturday 27 September 2025 22:01:07 +0000 (0:00:01.244) 0:01:00.375 ****
2025-09-27 22:10:27.469190 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.469197 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.469204 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.469211 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.469217 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.469224 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.469230 | orchestrator | 2025-09-27 22:10:27.469237 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 22:10:27.469244 | orchestrator | Saturday 27 September 2025 22:01:08 +0000 (0:00:00.748) 0:01:01.124 **** 2025-09-27 22:10:27.469257 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.469264 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.469271 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.469277 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.469284 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.469291 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.469297 | orchestrator | 2025-09-27 22:10:27.469304 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 22:10:27.469310 | orchestrator | Saturday 27 September 2025 22:01:08 +0000 (0:00:00.689) 0:01:01.814 **** 2025-09-27 22:10:27.469317 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.469324 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.469337 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.469344 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.469351 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.469358 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.469365 | orchestrator | 2025-09-27 22:10:27.469371 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 22:10:27.469378 | orchestrator | Saturday 27 September 
2025 22:01:09 +0000 (0:00:00.569) 0:01:02.384 **** 2025-09-27 22:10:27.469385 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.469391 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.469398 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.469404 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.469411 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.469417 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.469424 | orchestrator | 2025-09-27 22:10:27.469430 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 22:10:27.469437 | orchestrator | Saturday 27 September 2025 22:01:10 +0000 (0:00:01.341) 0:01:03.726 **** 2025-09-27 22:10:27.469444 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.469450 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.469457 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.469463 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.469470 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.469476 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.469483 | orchestrator | 2025-09-27 22:10:27.469490 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 22:10:27.469496 | orchestrator | Saturday 27 September 2025 22:01:11 +0000 (0:00:00.986) 0:01:04.712 **** 2025-09-27 22:10:27.469503 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.469510 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.469517 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.469523 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.469530 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.469536 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.469543 | orchestrator | 2025-09-27 22:10:27.469549 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2025-09-27 22:10:27.469556 | orchestrator | Saturday 27 September 2025 22:01:12 +0000 (0:00:01.007) 0:01:05.720 **** 2025-09-27 22:10:27.469563 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.469570 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.469576 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.469583 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.469590 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.469596 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.469603 | orchestrator | 2025-09-27 22:10:27.469610 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 22:10:27.469616 | orchestrator | Saturday 27 September 2025 22:01:13 +0000 (0:00:00.649) 0:01:06.370 **** 2025-09-27 22:10:27.469623 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.469630 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.469636 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.469643 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.469650 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.469656 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.469663 | orchestrator | 2025-09-27 22:10:27.469669 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 22:10:27.469676 | orchestrator | Saturday 27 September 2025 22:01:14 +0000 (0:00:00.775) 0:01:07.145 **** 2025-09-27 22:10:27.469683 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.469689 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.469696 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.469702 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.469709 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.469715 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.469727 | orchestrator | 2025-09-27 22:10:27.469737 | orchestrator 
| TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 22:10:27.469744 | orchestrator | Saturday 27 September 2025 22:01:14 +0000 (0:00:00.480) 0:01:07.626 **** 2025-09-27 22:10:27.469751 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.469757 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.469764 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.469770 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.469777 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.469783 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.469790 | orchestrator | 2025-09-27 22:10:27.469797 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 22:10:27.469803 | orchestrator | Saturday 27 September 2025 22:01:15 +0000 (0:00:00.584) 0:01:08.211 **** 2025-09-27 22:10:27.469810 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.469816 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.469823 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.469829 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.469836 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.469843 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.469849 | orchestrator | 2025-09-27 22:10:27.469856 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 22:10:27.469863 | orchestrator | Saturday 27 September 2025 22:01:15 +0000 (0:00:00.548) 0:01:08.759 **** 2025-09-27 22:10:27.469870 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.469876 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.469883 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.469889 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.469896 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.469902 | 
orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.469909 | orchestrator | 2025-09-27 22:10:27.469915 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 22:10:27.469926 | orchestrator | Saturday 27 September 2025 22:01:16 +0000 (0:00:00.790) 0:01:09.550 **** 2025-09-27 22:10:27.469933 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.469939 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.469946 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.469953 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.469959 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.469966 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.469972 | orchestrator | 2025-09-27 22:10:27.469979 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 22:10:27.469985 | orchestrator | Saturday 27 September 2025 22:01:17 +0000 (0:00:00.649) 0:01:10.199 **** 2025-09-27 22:10:27.469992 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.469999 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.470005 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.470012 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.470059 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.470066 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.470073 | orchestrator | 2025-09-27 22:10:27.470079 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 22:10:27.470086 | orchestrator | Saturday 27 September 2025 22:01:17 +0000 (0:00:00.683) 0:01:10.882 **** 2025-09-27 22:10:27.470093 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.470099 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.470106 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.470131 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.470138 | orchestrator 
| ok: [testbed-node-4] 2025-09-27 22:10:27.470145 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.470152 | orchestrator | 2025-09-27 22:10:27.470158 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-27 22:10:27.470165 | orchestrator | Saturday 27 September 2025 22:01:18 +0000 (0:00:01.109) 0:01:11.992 **** 2025-09-27 22:10:27.470172 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.470184 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.470191 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.470198 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.470204 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.470211 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.470227 | orchestrator | 2025-09-27 22:10:27.470234 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-27 22:10:27.470241 | orchestrator | Saturday 27 September 2025 22:01:20 +0000 (0:00:01.385) 0:01:13.378 **** 2025-09-27 22:10:27.470248 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.470254 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.470261 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.470267 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.470274 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.470280 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.470287 | orchestrator | 2025-09-27 22:10:27.470294 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-27 22:10:27.470301 | orchestrator | Saturday 27 September 2025 22:01:22 +0000 (0:00:02.234) 0:01:15.612 **** 2025-09-27 22:10:27.470308 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 
2025-09-27 22:10:27.470315 | orchestrator | 2025-09-27 22:10:27.470322 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-27 22:10:27.470329 | orchestrator | Saturday 27 September 2025 22:01:23 +0000 (0:00:01.217) 0:01:16.830 **** 2025-09-27 22:10:27.470335 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.470342 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.470349 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.470355 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.470362 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.470369 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.470375 | orchestrator | 2025-09-27 22:10:27.470382 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-27 22:10:27.470389 | orchestrator | Saturday 27 September 2025 22:01:24 +0000 (0:00:00.597) 0:01:17.427 **** 2025-09-27 22:10:27.470395 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.470402 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.470408 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.470415 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.470422 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.470433 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.470439 | orchestrator | 2025-09-27 22:10:27.470446 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-27 22:10:27.470452 | orchestrator | Saturday 27 September 2025 22:01:25 +0000 (0:00:00.645) 0:01:18.072 **** 2025-09-27 22:10:27.470459 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 22:10:27.470466 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 22:10:27.470473 | orchestrator | ok: 
[testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 22:10:27.470479 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 22:10:27.470486 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 22:10:27.470493 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 22:10:27.470499 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 22:10:27.470506 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 22:10:27.470513 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 22:10:27.470520 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 22:10:27.470532 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 22:10:27.470538 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 22:10:27.470545 | orchestrator | 2025-09-27 22:10:27.470567 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-27 22:10:27.470574 | orchestrator | Saturday 27 September 2025 22:01:26 +0000 (0:00:01.189) 0:01:19.262 **** 2025-09-27 22:10:27.470581 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.470588 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.470594 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.470601 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.470608 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.470615 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.470621 | orchestrator | 2025-09-27 22:10:27.470628 | orchestrator | TASK [ceph-container-common : 
Restore certificates selinux context] ************ 2025-09-27 22:10:27.470635 | orchestrator | Saturday 27 September 2025 22:01:27 +0000 (0:00:01.017) 0:01:20.279 **** 2025-09-27 22:10:27.470641 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.470648 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.470655 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.470662 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.470668 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.470675 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.470681 | orchestrator | 2025-09-27 22:10:27.470688 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-27 22:10:27.470694 | orchestrator | Saturday 27 September 2025 22:01:27 +0000 (0:00:00.485) 0:01:20.765 **** 2025-09-27 22:10:27.470701 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.470708 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.470715 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.470721 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.470728 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.470734 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.470741 | orchestrator | 2025-09-27 22:10:27.470747 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-27 22:10:27.470754 | orchestrator | Saturday 27 September 2025 22:01:28 +0000 (0:00:00.616) 0:01:21.381 **** 2025-09-27 22:10:27.470761 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.470767 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.470774 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.470781 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.470788 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.470795 | orchestrator | 
skipping: [testbed-node-5] 2025-09-27 22:10:27.470801 | orchestrator | 2025-09-27 22:10:27.470808 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-27 22:10:27.470815 | orchestrator | Saturday 27 September 2025 22:01:28 +0000 (0:00:00.514) 0:01:21.896 **** 2025-09-27 22:10:27.470822 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.470829 | orchestrator | 2025-09-27 22:10:27.470835 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-27 22:10:27.470842 | orchestrator | Saturday 27 September 2025 22:01:29 +0000 (0:00:00.988) 0:01:22.884 **** 2025-09-27 22:10:27.470849 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.470856 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.470863 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.470870 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.470876 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.470883 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.470890 | orchestrator | 2025-09-27 22:10:27.470897 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-27 22:10:27.470908 | orchestrator | Saturday 27 September 2025 22:02:14 +0000 (0:00:44.653) 0:02:07.538 **** 2025-09-27 22:10:27.470915 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 22:10:27.470922 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 22:10:27.470928 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 22:10:27.470935 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.470942 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 22:10:27.470953 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 22:10:27.470960 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 22:10:27.470966 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.470973 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 22:10:27.470980 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 22:10:27.470987 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 22:10:27.470994 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.471000 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 22:10:27.471007 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 22:10:27.471014 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 22:10:27.471020 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.471027 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 22:10:27.471034 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 22:10:27.471041 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 22:10:27.471048 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.471055 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 22:10:27.471062 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 22:10:27.471069 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 22:10:27.471088 | orchestrator | skipping: 
[testbed-node-5] 2025-09-27 22:10:27.471096 | orchestrator | 2025-09-27 22:10:27.471103 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-27 22:10:27.471123 | orchestrator | Saturday 27 September 2025 22:02:15 +0000 (0:00:00.608) 0:02:08.147 **** 2025-09-27 22:10:27.471131 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.471138 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.471145 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.471152 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.471159 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.471166 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.471172 | orchestrator | 2025-09-27 22:10:27.471179 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-27 22:10:27.471186 | orchestrator | Saturday 27 September 2025 22:02:15 +0000 (0:00:00.536) 0:02:08.683 **** 2025-09-27 22:10:27.471193 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.471199 | orchestrator | 2025-09-27 22:10:27.471206 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-27 22:10:27.471213 | orchestrator | Saturday 27 September 2025 22:02:16 +0000 (0:00:00.360) 0:02:09.043 **** 2025-09-27 22:10:27.471220 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.471226 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.471233 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.471239 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.471246 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.471258 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.471265 | orchestrator | 2025-09-27 22:10:27.471272 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-27 22:10:27.471279 | 
orchestrator | Saturday 27 September 2025 22:02:16 +0000 (0:00:00.582) 0:02:09.626 **** 2025-09-27 22:10:27.471286 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.471293 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.471299 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.471306 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.471313 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.471319 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.471326 | orchestrator | 2025-09-27 22:10:27.471333 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-27 22:10:27.471340 | orchestrator | Saturday 27 September 2025 22:02:17 +0000 (0:00:00.792) 0:02:10.418 **** 2025-09-27 22:10:27.471346 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.471353 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.471360 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.471366 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.471372 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.471379 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.471386 | orchestrator | 2025-09-27 22:10:27.471392 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-27 22:10:27.471399 | orchestrator | Saturday 27 September 2025 22:02:18 +0000 (0:00:00.602) 0:02:11.020 **** 2025-09-27 22:10:27.471406 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.471412 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.471419 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.471425 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.471432 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.471438 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.471446 | orchestrator | 2025-09-27 22:10:27.471453 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-27 22:10:27.471460 | orchestrator | Saturday 27 September 2025 22:02:20 +0000 (0:00:02.340) 0:02:13.361 **** 2025-09-27 22:10:27.471466 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.471472 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.471479 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.471486 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.471492 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.471499 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.471505 | orchestrator | 2025-09-27 22:10:27.471512 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-27 22:10:27.471519 | orchestrator | Saturday 27 September 2025 22:02:21 +0000 (0:00:00.736) 0:02:14.097 **** 2025-09-27 22:10:27.471531 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.471540 | orchestrator | 2025-09-27 22:10:27.471547 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-27 22:10:27.471554 | orchestrator | Saturday 27 September 2025 22:02:22 +0000 (0:00:01.114) 0:02:15.212 **** 2025-09-27 22:10:27.471561 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.471567 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.471574 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.471580 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.471587 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.471594 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.471601 | orchestrator | 2025-09-27 22:10:27.471607 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-27 22:10:27.471614 | 
orchestrator | Saturday 27 September 2025 22:02:22 +0000 (0:00:00.599) 0:02:15.811 **** 2025-09-27 22:10:27.471621 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.471627 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.471639 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.471646 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.471653 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.471660 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.471666 | orchestrator | 2025-09-27 22:10:27.471673 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-27 22:10:27.471680 | orchestrator | Saturday 27 September 2025 22:02:23 +0000 (0:00:00.698) 0:02:16.509 **** 2025-09-27 22:10:27.471686 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.471693 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.471699 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.471706 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.471712 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.471719 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.471725 | orchestrator | 2025-09-27 22:10:27.471732 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-27 22:10:27.471752 | orchestrator | Saturday 27 September 2025 22:02:24 +0000 (0:00:00.563) 0:02:17.072 **** 2025-09-27 22:10:27.471760 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.471766 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.471773 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.471780 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.471786 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.471793 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.471799 | orchestrator | 2025-09-27 
TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Saturday 27 September 2025 22:02:24 +0000 (0:00:00.766) 0:02:17.839 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Saturday 27 September 2025 22:02:25 +0000 (0:00:00.668) 0:02:18.508 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Saturday 27 September 2025 22:02:26 +0000 (0:00:00.823) 0:02:19.332 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Saturday 27 September 2025 22:02:27 +0000 (0:00:00.682) 0:02:20.014 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-2]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Saturday 27 September 2025 22:02:27 +0000 (0:00:00.903) 0:02:20.917 ****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Saturday 27 September 2025 22:02:29 +0000 (0:00:01.198) 0:02:22.116 ****
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4
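The included create_ceph_initial_dirs.yml expands next into a single looped file task; the item-by-item "changed" results that follow come from one `file` task iterating over the directory list. A hedged sketch of what that included file likely contains (the directory list is taken from this run's output; the ownership variables are assumptions):

```yaml
# Hypothetical sketch of create_ceph_initial_dirs.yml: one file task looped
# over the directories that appear item by item in the task output below.
- name: Create ceph initial directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: "{{ ceph_uid }}"   # assumption: ceph-ansible ownership variables
    group: "{{ ceph_uid }}"
    mode: "0755"
  loop:
    - /etc/ceph
    - /var/lib/ceph/
    - /var/lib/ceph/mon
    - /var/lib/ceph/osd
    - /var/lib/ceph/mds
    - /var/lib/ceph/tmp
    - /var/lib/ceph/crash
    - /var/lib/ceph/radosgw
    - /var/lib/ceph/bootstrap-rgw
    - /var/lib/ceph/bootstrap-mgr
    - /var/lib/ceph/bootstrap-mds
    - /var/lib/ceph/bootstrap-osd
    - /var/lib/ceph/bootstrap-rbd
    - /var/lib/ceph/bootstrap-rbd-mirror
    - /var/run/ceph
    - /var/log/ceph
```

Because the loop runs independently on each host, the per-item results from the six nodes interleave in the console output below.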
TASK [ceph-config : Create ceph initial directories] ***************************
Saturday 27 September 2025 22:02:30 +0000 (0:00:01.055) 0:02:23.172 ****
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/log/ceph)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/log/ceph)
changed: [testbed-node-4] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Saturday 27 September 2025 22:02:36 +0000 (0:00:06.344) 0:02:29.516 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create rados gateway instance directories] *****************
Saturday 27 September 2025 22:02:37 +0000 (0:00:00.939) 0:02:30.455 ****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Saturday 27 September 2025 22:02:38 +0000 (0:00:00.723) 0:02:31.179 ****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] ********************************************
Saturday 27 September 2025 22:02:39 +0000 (0:00:01.398) 0:02:32.577 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Saturday 27 September 2025 22:02:40 +0000 (0:00:00.824) 0:02:33.401 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Saturday 27 September 2025 22:02:41 +0000 (0:00:00.760) 0:02:34.162 ****
skipping: [testbed-node-2]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Saturday 27 September 2025 22:02:41 +0000 (0:00:00.628) 0:02:34.790 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact _devices] *****************************************
Saturday 27 September 2025 22:02:42 +0000 (0:00:00.548) 0:02:35.339 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Saturday 27 September 2025 22:02:43 +0000 (0:00:00.782) 0:02:36.122 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Saturday 27 September 2025 22:02:44 +0000 (0:00:00.909) 0:02:37.031 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Saturday 27 September 2025 22:02:44 +0000 (0:00:00.776) 0:02:37.808 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-3]
skipping: [testbed-node-2]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Saturday 27 September 2025 22:02:45 +0000 (0:00:00.775) 0:02:38.583 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Saturday 27 September 2025 22:02:48 +0000 (0:00:03.286) 0:02:41.870 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Saturday 27 September 2025 22:02:49 +0000 (0:00:00.630) 0:02:42.501 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Saturday 27 September 2025 22:02:50 +0000 (0:00:00.705) 0:02:43.206 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Render rgw configs] ****************************************
Saturday 27 September 2025 22:02:50 +0000 (0:00:00.554) 0:02:43.761 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Set config to cluster] *************************************
Saturday 27 September 2025 22:02:51 +0000 (0:00:00.892) 0:02:44.654 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-5]

TASK [ceph-config : Set rgw configs to file] ***********************************
Saturday 27 September 2025 22:02:52 +0000 (0:00:00.755) 0:02:45.409 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Create ceph conf directory] ********************************
Saturday 27 September 2025 22:02:53 +0000 (0:00:00.818) 0:02:46.228 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Saturday 27 September 2025 22:02:53 +0000 (0:00:00.549) 0:02:46.778 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Saturday 27 September 2025 22:02:54 +0000 (0:00:00.970) 0:02:47.749 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Saturday 27 September 2025 22:02:55 +0000 (0:00:00.720) 0:02:48.470 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Saturday 27 September 2025 22:02:56 +0000 (0:00:00.878) 0:02:49.349 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact _interface] ****************************************
Saturday 27 September 2025 22:02:57 +0000 (0:00:00.789) 0:02:50.138 ****
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Saturday 27 September 2025 22:02:57 +0000 (0:00:00.562) 0:02:50.701 ****
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Saturday 27 September 2025 22:02:58 +0000 (0:00:00.608) 0:02:51.309 ****
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Saturday 27 September 2025 22:02:58 +0000 (0:00:00.308) 0:02:51.618 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Saturday 27 September 2025 22:02:59 +0000 (0:00:00.714) 0:02:52.332 ****
skipping:
[testbed-node-0] => (item=0)  2025-09-27 22:10:27.474573 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.474579 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-27 22:10:27.474586 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-27 22:10:27.474592 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.474598 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.474604 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-27 22:10:27.474610 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-27 22:10:27.474616 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-27 22:10:27.474622 | orchestrator | 2025-09-27 22:10:27.474629 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-27 22:10:27.474635 | orchestrator | Saturday 27 September 2025 22:03:01 +0000 (0:00:02.474) 0:02:54.807 **** 2025-09-27 22:10:27.474642 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.474648 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.474659 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.474665 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.474671 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.474678 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.474684 | orchestrator | 2025-09-27 22:10:27.474690 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 22:10:27.474696 | orchestrator | Saturday 27 September 2025 22:03:04 +0000 (0:00:02.269) 0:02:57.076 **** 2025-09-27 22:10:27.474701 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.474707 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.474712 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.474723 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.474728 | orchestrator | changed: [testbed-node-4] 2025-09-27 
22:10:27.474734 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.474739 | orchestrator | 2025-09-27 22:10:27.474744 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-27 22:10:27.474750 | orchestrator | Saturday 27 September 2025 22:03:04 +0000 (0:00:00.884) 0:02:57.961 **** 2025-09-27 22:10:27.474755 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.474761 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.474766 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.474771 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:10:27.474777 | orchestrator | 2025-09-27 22:10:27.474782 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-27 22:10:27.474788 | orchestrator | Saturday 27 September 2025 22:03:05 +0000 (0:00:01.000) 0:02:58.961 **** 2025-09-27 22:10:27.474793 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.474798 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.474804 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.474809 | orchestrator | 2025-09-27 22:10:27.474815 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-27 22:10:27.474831 | orchestrator | Saturday 27 September 2025 22:03:06 +0000 (0:00:00.285) 0:02:59.247 **** 2025-09-27 22:10:27.474837 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.474842 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.474848 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.474853 | orchestrator | 2025-09-27 22:10:27.474859 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-27 22:10:27.474864 | orchestrator | Saturday 27 September 2025 22:03:07 +0000 (0:00:01.180) 0:03:00.427 **** 2025-09-27 
22:10:27.474870 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 22:10:27.474875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-27 22:10:27.474881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 22:10:27.474886 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.474891 | orchestrator | 2025-09-27 22:10:27.474897 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-27 22:10:27.474902 | orchestrator | Saturday 27 September 2025 22:03:08 +0000 (0:00:00.609) 0:03:01.037 **** 2025-09-27 22:10:27.474907 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.474913 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.474919 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.474924 | orchestrator | 2025-09-27 22:10:27.474929 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-27 22:10:27.474935 | orchestrator | Saturday 27 September 2025 22:03:08 +0000 (0:00:00.546) 0:03:01.584 **** 2025-09-27 22:10:27.474940 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.474945 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.474951 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.474956 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.474962 | orchestrator | 2025-09-27 22:10:27.474967 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-27 22:10:27.474973 | orchestrator | Saturday 27 September 2025 22:03:09 +0000 (0:00:00.766) 0:03:02.350 **** 2025-09-27 22:10:27.474979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 22:10:27.474984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 22:10:27.474990 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 22:10:27.474995 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475000 | orchestrator | 2025-09-27 22:10:27.475006 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-27 22:10:27.475011 | orchestrator | Saturday 27 September 2025 22:03:09 +0000 (0:00:00.655) 0:03:03.006 **** 2025-09-27 22:10:27.475021 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475027 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.475032 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.475038 | orchestrator | 2025-09-27 22:10:27.475043 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-27 22:10:27.475049 | orchestrator | Saturday 27 September 2025 22:03:10 +0000 (0:00:00.437) 0:03:03.444 **** 2025-09-27 22:10:27.475054 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475060 | orchestrator | 2025-09-27 22:10:27.475065 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-27 22:10:27.475071 | orchestrator | Saturday 27 September 2025 22:03:10 +0000 (0:00:00.215) 0:03:03.659 **** 2025-09-27 22:10:27.475076 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475081 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.475087 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.475092 | orchestrator | 2025-09-27 22:10:27.475098 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-27 22:10:27.475103 | orchestrator | Saturday 27 September 2025 22:03:10 +0000 (0:00:00.248) 0:03:03.907 **** 2025-09-27 22:10:27.475122 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475129 | orchestrator | 2025-09-27 22:10:27.475135 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] 
******************** 2025-09-27 22:10:27.475142 | orchestrator | Saturday 27 September 2025 22:03:11 +0000 (0:00:00.182) 0:03:04.089 **** 2025-09-27 22:10:27.475147 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475153 | orchestrator | 2025-09-27 22:10:27.475162 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-27 22:10:27.475168 | orchestrator | Saturday 27 September 2025 22:03:11 +0000 (0:00:00.207) 0:03:04.297 **** 2025-09-27 22:10:27.475173 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475178 | orchestrator | 2025-09-27 22:10:27.475184 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-27 22:10:27.475190 | orchestrator | Saturday 27 September 2025 22:03:11 +0000 (0:00:00.102) 0:03:04.399 **** 2025-09-27 22:10:27.475195 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475201 | orchestrator | 2025-09-27 22:10:27.475206 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-27 22:10:27.475212 | orchestrator | Saturday 27 September 2025 22:03:11 +0000 (0:00:00.191) 0:03:04.591 **** 2025-09-27 22:10:27.475218 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475223 | orchestrator | 2025-09-27 22:10:27.475228 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-27 22:10:27.475234 | orchestrator | Saturday 27 September 2025 22:03:11 +0000 (0:00:00.193) 0:03:04.785 **** 2025-09-27 22:10:27.475239 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 22:10:27.475245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 22:10:27.475251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 22:10:27.475256 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475262 | orchestrator | 2025-09-27 22:10:27.475268 
| orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-27 22:10:27.475273 | orchestrator | Saturday 27 September 2025 22:03:12 +0000 (0:00:00.534) 0:03:05.320 **** 2025-09-27 22:10:27.475279 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475284 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.475290 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.475295 | orchestrator | 2025-09-27 22:10:27.475311 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-27 22:10:27.475318 | orchestrator | Saturday 27 September 2025 22:03:12 +0000 (0:00:00.490) 0:03:05.811 **** 2025-09-27 22:10:27.475323 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475329 | orchestrator | 2025-09-27 22:10:27.475334 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-27 22:10:27.475343 | orchestrator | Saturday 27 September 2025 22:03:13 +0000 (0:00:00.271) 0:03:06.082 **** 2025-09-27 22:10:27.475349 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475354 | orchestrator | 2025-09-27 22:10:27.475360 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-27 22:10:27.475365 | orchestrator | Saturday 27 September 2025 22:03:13 +0000 (0:00:00.197) 0:03:06.280 **** 2025-09-27 22:10:27.475370 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.475376 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.475381 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.475386 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.475392 | orchestrator | 2025-09-27 22:10:27.475397 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-27 22:10:27.475403 | 
orchestrator | Saturday 27 September 2025 22:03:14 +0000 (0:00:01.003) 0:03:07.283 **** 2025-09-27 22:10:27.475409 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.475414 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.475419 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.475425 | orchestrator | 2025-09-27 22:10:27.475430 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-27 22:10:27.475436 | orchestrator | Saturday 27 September 2025 22:03:14 +0000 (0:00:00.337) 0:03:07.621 **** 2025-09-27 22:10:27.475441 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.475447 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.475452 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.475457 | orchestrator | 2025-09-27 22:10:27.475463 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-27 22:10:27.475468 | orchestrator | Saturday 27 September 2025 22:03:16 +0000 (0:00:01.402) 0:03:09.023 **** 2025-09-27 22:10:27.475474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 22:10:27.475479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 22:10:27.475484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 22:10:27.475490 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475495 | orchestrator | 2025-09-27 22:10:27.475500 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-27 22:10:27.475506 | orchestrator | Saturday 27 September 2025 22:03:16 +0000 (0:00:00.767) 0:03:09.791 **** 2025-09-27 22:10:27.475511 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.475517 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.475522 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.475528 | orchestrator | 2025-09-27 22:10:27.475533 | orchestrator | 
RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-27 22:10:27.475538 | orchestrator | Saturday 27 September 2025 22:03:17 +0000 (0:00:00.295) 0:03:10.086 **** 2025-09-27 22:10:27.475544 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.475549 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.475555 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.475560 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.475566 | orchestrator | 2025-09-27 22:10:27.475571 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-27 22:10:27.475577 | orchestrator | Saturday 27 September 2025 22:03:17 +0000 (0:00:00.858) 0:03:10.945 **** 2025-09-27 22:10:27.475582 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.475588 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.475593 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.475598 | orchestrator | 2025-09-27 22:10:27.475604 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-27 22:10:27.475609 | orchestrator | Saturday 27 September 2025 22:03:18 +0000 (0:00:00.259) 0:03:11.204 **** 2025-09-27 22:10:27.475615 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.475633 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.475639 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.475644 | orchestrator | 2025-09-27 22:10:27.475650 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-27 22:10:27.475655 | orchestrator | Saturday 27 September 2025 22:03:19 +0000 (0:00:01.296) 0:03:12.501 **** 2025-09-27 22:10:27.475661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 22:10:27.475666 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-4)  2025-09-27 22:10:27.475672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 22:10:27.475677 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475682 | orchestrator | 2025-09-27 22:10:27.475688 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-27 22:10:27.475693 | orchestrator | Saturday 27 September 2025 22:03:19 +0000 (0:00:00.466) 0:03:12.967 **** 2025-09-27 22:10:27.475699 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.475704 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.475709 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.475715 | orchestrator | 2025-09-27 22:10:27.475720 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-27 22:10:27.475726 | orchestrator | Saturday 27 September 2025 22:03:20 +0000 (0:00:00.389) 0:03:13.357 **** 2025-09-27 22:10:27.475732 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.475737 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.475743 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.475748 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475754 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.475759 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.475764 | orchestrator | 2025-09-27 22:10:27.475769 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-27 22:10:27.475775 | orchestrator | Saturday 27 September 2025 22:03:20 +0000 (0:00:00.615) 0:03:13.972 **** 2025-09-27 22:10:27.475793 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.475799 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.475804 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.475809 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:10:27.475815 | orchestrator | 2025-09-27 22:10:27.475820 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-27 22:10:27.475825 | orchestrator | Saturday 27 September 2025 22:03:21 +0000 (0:00:00.905) 0:03:14.877 **** 2025-09-27 22:10:27.475831 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.475836 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.475842 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.475848 | orchestrator | 2025-09-27 22:10:27.475857 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-27 22:10:27.475867 | orchestrator | Saturday 27 September 2025 22:03:22 +0000 (0:00:00.311) 0:03:15.189 **** 2025-09-27 22:10:27.475877 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.475885 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.475894 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.475903 | orchestrator | 2025-09-27 22:10:27.475913 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-27 22:10:27.475924 | orchestrator | Saturday 27 September 2025 22:03:23 +0000 (0:00:01.315) 0:03:16.504 **** 2025-09-27 22:10:27.475930 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 22:10:27.475935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-27 22:10:27.475940 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 22:10:27.475946 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.475951 | orchestrator | 2025-09-27 22:10:27.475957 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-27 22:10:27.475962 | orchestrator | Saturday 27 September 2025 22:03:24 +0000 (0:00:00.596) 0:03:17.101 **** 2025-09-27 22:10:27.475973 | orchestrator 
| ok: [testbed-node-0] 2025-09-27 22:10:27.475979 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.475984 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.475989 | orchestrator | 2025-09-27 22:10:27.475995 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-27 22:10:27.476000 | orchestrator | 2025-09-27 22:10:27.476005 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 22:10:27.476011 | orchestrator | Saturday 27 September 2025 22:03:24 +0000 (0:00:00.711) 0:03:17.813 **** 2025-09-27 22:10:27.476016 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:10:27.476022 | orchestrator | 2025-09-27 22:10:27.476027 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 22:10:27.476032 | orchestrator | Saturday 27 September 2025 22:03:25 +0000 (0:00:01.008) 0:03:18.821 **** 2025-09-27 22:10:27.476037 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:10:27.476043 | orchestrator | 2025-09-27 22:10:27.476048 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 22:10:27.476053 | orchestrator | Saturday 27 September 2025 22:03:26 +0000 (0:00:00.529) 0:03:19.350 **** 2025-09-27 22:10:27.476059 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.476064 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.476069 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.476075 | orchestrator | 2025-09-27 22:10:27.476080 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 22:10:27.476086 | orchestrator | Saturday 27 September 2025 22:03:27 +0000 (0:00:00.747) 0:03:20.098 **** 
2025-09-27 22:10:27.476091 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.476096 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.476102 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.476107 | orchestrator | 2025-09-27 22:10:27.476125 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 22:10:27.476131 | orchestrator | Saturday 27 September 2025 22:03:27 +0000 (0:00:00.635) 0:03:20.734 **** 2025-09-27 22:10:27.476137 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.476146 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.476152 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.476158 | orchestrator | 2025-09-27 22:10:27.476163 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 22:10:27.476169 | orchestrator | Saturday 27 September 2025 22:03:28 +0000 (0:00:00.500) 0:03:21.234 **** 2025-09-27 22:10:27.476174 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.476180 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.476185 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.476190 | orchestrator | 2025-09-27 22:10:27.476196 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 22:10:27.476201 | orchestrator | Saturday 27 September 2025 22:03:28 +0000 (0:00:00.505) 0:03:21.740 **** 2025-09-27 22:10:27.476207 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.476212 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.476218 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.476223 | orchestrator | 2025-09-27 22:10:27.476228 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 22:10:27.476234 | orchestrator | Saturday 27 September 2025 22:03:29 +0000 (0:00:00.899) 0:03:22.640 **** 2025-09-27 
22:10:27.476239 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.476244 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.476250 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.476256 | orchestrator | 2025-09-27 22:10:27.476261 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 22:10:27.476268 | orchestrator | Saturday 27 September 2025 22:03:30 +0000 (0:00:00.899) 0:03:23.539 **** 2025-09-27 22:10:27.476278 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.476283 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.476288 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.476294 | orchestrator | 2025-09-27 22:10:27.476299 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 22:10:27.476316 | orchestrator | Saturday 27 September 2025 22:03:31 +0000 (0:00:00.652) 0:03:24.192 **** 2025-09-27 22:10:27.476322 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.476327 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.476332 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.476338 | orchestrator | 2025-09-27 22:10:27.476343 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 22:10:27.476348 | orchestrator | Saturday 27 September 2025 22:03:32 +0000 (0:00:01.096) 0:03:25.288 **** 2025-09-27 22:10:27.476354 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.476359 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.476364 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.476370 | orchestrator | 2025-09-27 22:10:27.476375 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 22:10:27.476381 | orchestrator | Saturday 27 September 2025 22:03:33 +0000 (0:00:01.584) 0:03:26.873 **** 2025-09-27 22:10:27.476386 | orchestrator | 
skipping: [testbed-node-0] 2025-09-27 22:10:27.476392 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.476397 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.476402 | orchestrator | 2025-09-27 22:10:27.476408 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 22:10:27.476413 | orchestrator | Saturday 27 September 2025 22:03:34 +0000 (0:00:00.713) 0:03:27.587 **** 2025-09-27 22:10:27.476418 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.476424 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.476429 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.476435 | orchestrator | 2025-09-27 22:10:27.476440 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 22:10:27.476445 | orchestrator | Saturday 27 September 2025 22:03:34 +0000 (0:00:00.341) 0:03:27.928 **** 2025-09-27 22:10:27.476451 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.476456 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.476462 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.476467 | orchestrator | 2025-09-27 22:10:27.476473 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 22:10:27.476478 | orchestrator | Saturday 27 September 2025 22:03:35 +0000 (0:00:00.367) 0:03:28.296 **** 2025-09-27 22:10:27.476484 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.476489 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.476495 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.476500 | orchestrator | 2025-09-27 22:10:27.476505 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 22:10:27.476511 | orchestrator | Saturday 27 September 2025 22:03:35 +0000 (0:00:00.326) 0:03:28.622 **** 2025-09-27 22:10:27.476517 | orchestrator | skipping: 
[testbed-node-0]
2025-09-27 22:10:27.476522 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.476527 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.476533 | orchestrator |
2025-09-27 22:10:27.476538 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-27 22:10:27.476543 | orchestrator | Saturday 27 September 2025 22:03:36 +0000 (0:00:00.426) 0:03:29.049 ****
2025-09-27 22:10:27.476549 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.476554 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.476559 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.476565 | orchestrator |
2025-09-27 22:10:27.476570 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-27 22:10:27.476576 | orchestrator | Saturday 27 September 2025 22:03:36 +0000 (0:00:00.515) 0:03:29.564 ****
2025-09-27 22:10:27.476581 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.476587 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.476597 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.476602 | orchestrator |
2025-09-27 22:10:27.476608 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-27 22:10:27.476613 | orchestrator | Saturday 27 September 2025 22:03:36 +0000 (0:00:00.368) 0:03:29.933 ****
2025-09-27 22:10:27.476619 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.476624 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.476630 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.476635 | orchestrator |
2025-09-27 22:10:27.476640 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-27 22:10:27.476646 | orchestrator | Saturday 27 September 2025 22:03:37 +0000 (0:00:00.363) 0:03:30.297 ****
2025-09-27 22:10:27.476651 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.476656 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.476666 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.476671 | orchestrator |
2025-09-27 22:10:27.476677 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-27 22:10:27.476682 | orchestrator | Saturday 27 September 2025 22:03:37 +0000 (0:00:00.440) 0:03:30.738 ****
2025-09-27 22:10:27.476688 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.476693 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.476698 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.476703 | orchestrator |
2025-09-27 22:10:27.476709 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-09-27 22:10:27.476714 | orchestrator | Saturday 27 September 2025 22:03:38 +0000 (0:00:00.760) 0:03:31.498 ****
2025-09-27 22:10:27.476719 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.476725 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.476730 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.476735 | orchestrator |
2025-09-27 22:10:27.476741 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-09-27 22:10:27.476746 | orchestrator | Saturday 27 September 2025 22:03:38 +0000 (0:00:00.299) 0:03:31.797 ****
2025-09-27 22:10:27.476752 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:10:27.476757 | orchestrator |
2025-09-27 22:10:27.476763 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-09-27 22:10:27.476768 | orchestrator | Saturday 27 September 2025 22:03:39 +0000 (0:00:00.680) 0:03:32.478 ****
2025-09-27 22:10:27.476774 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.476779 | orchestrator |
2025-09-27 22:10:27.476784 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-09-27 22:10:27.476790 | orchestrator | Saturday 27 September 2025 22:03:39 +0000 (0:00:00.181) 0:03:32.660 ****
2025-09-27 22:10:27.476795 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-09-27 22:10:27.476800 | orchestrator |
2025-09-27 22:10:27.476817 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-09-27 22:10:27.476823 | orchestrator | Saturday 27 September 2025 22:03:40 +0000 (0:00:00.894) 0:03:33.554 ****
2025-09-27 22:10:27.476828 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.476833 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.476839 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.476844 | orchestrator |
2025-09-27 22:10:27.476850 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-09-27 22:10:27.476855 | orchestrator | Saturday 27 September 2025 22:03:40 +0000 (0:00:00.277) 0:03:33.832 ****
2025-09-27 22:10:27.476860 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.476866 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.476871 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.476876 | orchestrator |
2025-09-27 22:10:27.476882 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-09-27 22:10:27.476887 | orchestrator | Saturday 27 September 2025 22:03:41 +0000 (0:00:00.250) 0:03:34.083 ****
2025-09-27 22:10:27.476893 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.476898 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.476909 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.476914 | orchestrator |
2025-09-27 22:10:27.476919 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-09-27 22:10:27.476925 | orchestrator | Saturday 27 September 2025 22:03:42 +0000 (0:00:01.167) 0:03:35.251 ****
2025-09-27 22:10:27.476930 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.476935 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.476941 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.476946 | orchestrator |
2025-09-27 22:10:27.476952 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-09-27 22:10:27.476958 | orchestrator | Saturday 27 September 2025 22:03:43 +0000 (0:00:01.082) 0:03:36.333 ****
2025-09-27 22:10:27.476963 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.476968 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.476974 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.476979 | orchestrator |
2025-09-27 22:10:27.476984 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-09-27 22:10:27.476990 | orchestrator | Saturday 27 September 2025 22:03:44 +0000 (0:00:01.564) 0:03:37.897 ****
2025-09-27 22:10:27.476995 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.477001 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.477006 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.477011 | orchestrator |
2025-09-27 22:10:27.477017 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-09-27 22:10:27.477022 | orchestrator | Saturday 27 September 2025 22:03:45 +0000 (0:00:00.908) 0:03:38.806 ****
2025-09-27 22:10:27.477027 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.477033 | orchestrator |
2025-09-27 22:10:27.477038 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-09-27 22:10:27.477043 | orchestrator | Saturday 27 September 2025 22:03:47 +0000 (0:00:01.229) 0:03:40.036 ****
2025-09-27 22:10:27.477049 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.477054 | orchestrator |
2025-09-27 22:10:27.477060 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-09-27 22:10:27.477065 | orchestrator | Saturday 27 September 2025 22:03:47 +0000 (0:00:00.568) 0:03:40.604 ****
2025-09-27 22:10:27.477070 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-27 22:10:27.477076 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:10:27.477081 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:10:27.477087 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-27 22:10:27.477092 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-09-27 22:10:27.477097 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-27 22:10:27.477103 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-27 22:10:27.477108 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-09-27 22:10:27.477124 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-27 22:10:27.477130 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-09-27 22:10:27.477139 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-09-27 22:10:27.477145 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-09-27 22:10:27.477150 | orchestrator |
2025-09-27 22:10:27.477156 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-09-27 22:10:27.477161 | orchestrator | Saturday 27 September 2025 22:03:51 +0000 (0:00:03.556) 0:03:44.161 ****
2025-09-27 22:10:27.477167 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.477172 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.477178 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.477183 | orchestrator |
2025-09-27 22:10:27.477188 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-09-27 22:10:27.477194 | orchestrator | Saturday 27 September 2025 22:03:52 +0000 (0:00:01.236) 0:03:45.397 ****
2025-09-27 22:10:27.477204 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.477210 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.477215 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.477221 | orchestrator |
2025-09-27 22:10:27.477226 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-09-27 22:10:27.477231 | orchestrator | Saturday 27 September 2025 22:03:52 +0000 (0:00:00.232) 0:03:45.630 ****
2025-09-27 22:10:27.477237 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.477242 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.477248 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.477253 | orchestrator |
2025-09-27 22:10:27.477259 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-09-27 22:10:27.477265 | orchestrator | Saturday 27 September 2025 22:03:52 +0000 (0:00:00.248) 0:03:45.879 ****
2025-09-27 22:10:27.477270 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.477276 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.477281 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.477286 | orchestrator |
2025-09-27 22:10:27.477292 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-09-27 22:10:27.477308 | orchestrator | Saturday 27 September 2025 22:03:54 +0000 (0:00:01.707) 0:03:47.586 ****
2025-09-27 22:10:27.477314 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.477319 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.477325 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.477330 | orchestrator |
2025-09-27 22:10:27.477336 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-09-27 22:10:27.477341 | orchestrator | Saturday 27 September 2025 22:03:55 +0000 (0:00:01.336) 0:03:48.923 ****
2025-09-27 22:10:27.477346 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.477352 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.477357 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.477362 | orchestrator |
2025-09-27 22:10:27.477368 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-09-27 22:10:27.477373 | orchestrator | Saturday 27 September 2025 22:03:56 +0000 (0:00:00.507) 0:03:49.237 ****
2025-09-27 22:10:27.477378 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:10:27.477384 | orchestrator |
2025-09-27 22:10:27.477389 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-09-27 22:10:27.477395 | orchestrator | Saturday 27 September 2025 22:03:56 +0000 (0:00:00.507) 0:03:49.744 ****
2025-09-27 22:10:27.477400 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.477405 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.477411 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.477416 | orchestrator |
2025-09-27 22:10:27.477421 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-09-27 22:10:27.477427 | orchestrator | Saturday 27 September 2025 22:03:57 +0000 (0:00:00.624) 0:03:50.369 ****
2025-09-27 22:10:27.477432 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.477438 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.477443 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.477449 | orchestrator |
2025-09-27 22:10:27.477454 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-09-27 22:10:27.477460 | orchestrator | Saturday 27 September 2025 22:03:57 +0000 (0:00:00.299) 0:03:50.668 ****
2025-09-27 22:10:27.477465 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:10:27.477470 | orchestrator |
2025-09-27 22:10:27.477476 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-09-27 22:10:27.477481 | orchestrator | Saturday 27 September 2025 22:03:58 +0000 (0:00:00.515) 0:03:51.184 ****
2025-09-27 22:10:27.477487 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.477492 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.477502 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.477507 | orchestrator |
2025-09-27 22:10:27.477512 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-09-27 22:10:27.477518 | orchestrator | Saturday 27 September 2025 22:04:00 +0000 (0:00:02.070) 0:03:53.254 ****
2025-09-27 22:10:27.477523 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.477529 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.477534 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.477540 | orchestrator |
2025-09-27 22:10:27.477545 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-09-27 22:10:27.477550 | orchestrator | Saturday 27 September 2025 22:04:01 +0000 (0:00:01.062) 0:03:54.317 ****
2025-09-27 22:10:27.477556 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.477561 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.477567 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.477572 | orchestrator |
2025-09-27 22:10:27.477577 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-09-27 22:10:27.477583 | orchestrator | Saturday 27 September 2025 22:04:02 +0000 (0:00:01.454) 0:03:55.772 ****
2025-09-27 22:10:27.477588 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.477593 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.477599 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.477605 | orchestrator |
2025-09-27 22:10:27.477610 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-09-27 22:10:27.477615 | orchestrator | Saturday 27 September 2025 22:04:04 +0000 (0:00:01.771) 0:03:57.543 ****
2025-09-27 22:10:27.477624 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:10:27.477630 | orchestrator |
2025-09-27 22:10:27.477635 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-09-27 22:10:27.477641 | orchestrator | Saturday 27 September 2025 22:04:05 +0000 (0:00:00.800) 0:03:58.344 ****
2025-09-27 22:10:27.477646 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.477651 | orchestrator |
2025-09-27 22:10:27.477657 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-09-27 22:10:27.477662 | orchestrator | Saturday 27 September 2025 22:04:06 +0000 (0:00:01.168) 0:03:59.512 ****
2025-09-27 22:10:27.477668 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.477673 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.477679 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.477684 | orchestrator |
2025-09-27 22:10:27.477689 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-09-27 22:10:27.477695 | orchestrator | Saturday 27 September 2025 22:04:16 +0000 (0:00:09.517) 0:04:09.030 ****
2025-09-27 22:10:27.477700 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.477705 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.477711 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.477716 | orchestrator |
2025-09-27 22:10:27.477721 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-09-27 22:10:27.477727 | orchestrator | Saturday 27 September 2025 22:04:16 +0000 (0:00:00.307) 0:04:09.337 ****
2025-09-27 22:10:27.477746 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a0ed480b9c0e18969c4c3a6b02b30e8fa4448a2'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-09-27 22:10:27.477755 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a0ed480b9c0e18969c4c3a6b02b30e8fa4448a2'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-09-27 22:10:27.477762 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a0ed480b9c0e18969c4c3a6b02b30e8fa4448a2'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-09-27 22:10:27.477776 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a0ed480b9c0e18969c4c3a6b02b30e8fa4448a2'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-09-27 22:10:27.477782 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a0ed480b9c0e18969c4c3a6b02b30e8fa4448a2'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-09-27 22:10:27.477789 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a0ed480b9c0e18969c4c3a6b02b30e8fa4448a2'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2a0ed480b9c0e18969c4c3a6b02b30e8fa4448a2'}])
2025-09-27 22:10:27.477796 | orchestrator |
2025-09-27 22:10:27.477802 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-27 22:10:27.477807 | orchestrator | Saturday 27 September 2025 22:04:31 +0000 (0:00:14.807) 0:04:24.144 ****
2025-09-27 22:10:27.477812 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.477818 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.477823 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.477828 | orchestrator |
2025-09-27 22:10:27.477834 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-27 22:10:27.477839 | orchestrator | Saturday 27 September 2025 22:04:31 +0000 (0:00:00.402) 0:04:24.547 ****
2025-09-27 22:10:27.477845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:10:27.477850 | orchestrator |
2025-09-27 22:10:27.477855 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-27 22:10:27.477861 | orchestrator | Saturday 27 September 2025 22:04:32 +0000 (0:00:00.696) 0:04:25.244 ****
2025-09-27 22:10:27.477866 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.477876 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.477882 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.477887 | orchestrator |
2025-09-27 22:10:27.477892 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-27 22:10:27.477898 | orchestrator | Saturday 27 September 2025 22:04:32 +0000 (0:00:00.311) 0:04:25.555 ****
2025-09-27 22:10:27.477903 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.477908 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.477914 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.477919 | orchestrator |
2025-09-27 22:10:27.477924 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-27 22:10:27.477930 | orchestrator | Saturday 27 September 2025 22:04:32 +0000 (0:00:00.312) 0:04:25.867 ****
2025-09-27 22:10:27.477935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 22:10:27.477941 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-27 22:10:27.477946 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-27 22:10:27.477952 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.477957 | orchestrator |
2025-09-27 22:10:27.477962 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-27 22:10:27.477973 | orchestrator | Saturday 27 September 2025 22:04:33 +0000 (0:00:00.766) 0:04:26.634 ****
2025-09-27 22:10:27.477978 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.477984 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.477989 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.477994 | orchestrator |
2025-09-27 22:10:27.478000 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-09-27 22:10:27.478005 | orchestrator |
2025-09-27 22:10:27.478011 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-27 22:10:27.478040 | orchestrator | Saturday 27 September 2025 22:04:34 +0000 (0:00:00.629) 0:04:27.264 ****
2025-09-27 22:10:27.478058 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:10:27.478064 | orchestrator |
2025-09-27 22:10:27.478070 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-27 22:10:27.478076 | orchestrator | Saturday 27 September 2025 22:04:34 +0000 (0:00:00.451) 0:04:27.716 ****
2025-09-27 22:10:27.478081 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:10:27.478087 | orchestrator |
2025-09-27 22:10:27.478092 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-27 22:10:27.478097 | orchestrator | Saturday 27 September 2025 22:04:35 +0000 (0:00:00.610) 0:04:28.327 ****
2025-09-27 22:10:27.478103 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.478108 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.478158 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.478167 | orchestrator |
2025-09-27 22:10:27.478175 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-27 22:10:27.478184 | orchestrator | Saturday 27 September 2025 22:04:35 +0000 (0:00:00.613) 0:04:28.940 ****
2025-09-27 22:10:27.478193 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478201 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478212 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478221 | orchestrator |
2025-09-27 22:10:27.478230 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-27 22:10:27.478239 | orchestrator | Saturday 27 September 2025 22:04:36 +0000 (0:00:00.265) 0:04:29.206 ****
2025-09-27 22:10:27.478248 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478257 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478265 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478272 | orchestrator |
2025-09-27 22:10:27.478281 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-27 22:10:27.478290 | orchestrator | Saturday 27 September 2025 22:04:36 +0000 (0:00:00.275) 0:04:29.481 ****
2025-09-27 22:10:27.478298 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478307 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478315 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478324 | orchestrator |
2025-09-27 22:10:27.478333 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-27 22:10:27.478341 | orchestrator | Saturday 27 September 2025 22:04:36 +0000 (0:00:00.412) 0:04:29.894 ****
2025-09-27 22:10:27.478353 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.478360 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.478366 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.478371 | orchestrator |
2025-09-27 22:10:27.478377 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-27 22:10:27.478382 | orchestrator | Saturday 27 September 2025 22:04:37 +0000 (0:00:00.625) 0:04:30.519 ****
2025-09-27 22:10:27.478388 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478393 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478399 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478404 | orchestrator |
2025-09-27 22:10:27.478409 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-27 22:10:27.478423 | orchestrator | Saturday 27 September 2025 22:04:37 +0000 (0:00:00.303) 0:04:30.823 ****
2025-09-27 22:10:27.478429 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478434 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478439 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478445 | orchestrator |
2025-09-27 22:10:27.478450 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-27 22:10:27.478455 | orchestrator | Saturday 27 September 2025 22:04:38 +0000 (0:00:00.264) 0:04:31.087 ****
2025-09-27 22:10:27.478461 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.478466 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.478472 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.478477 | orchestrator |
2025-09-27 22:10:27.478483 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-27 22:10:27.478488 | orchestrator | Saturday 27 September 2025 22:04:38 +0000 (0:00:00.560) 0:04:31.648 ****
2025-09-27 22:10:27.478493 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.478499 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.478505 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.478514 | orchestrator |
2025-09-27 22:10:27.478528 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-27 22:10:27.478538 | orchestrator | Saturday 27 September 2025 22:04:39 +0000 (0:00:00.916) 0:04:32.564 ****
2025-09-27 22:10:27.478547 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478556 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478566 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478572 | orchestrator |
2025-09-27 22:10:27.478578 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-27 22:10:27.478584 | orchestrator | Saturday 27 September 2025 22:04:39 +0000 (0:00:00.227) 0:04:32.792 ****
2025-09-27 22:10:27.478589 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.478595 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.478600 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.478606 | orchestrator |
2025-09-27 22:10:27.478611 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-27 22:10:27.478617 | orchestrator | Saturday 27 September 2025 22:04:40 +0000 (0:00:00.248) 0:04:33.040 ****
2025-09-27 22:10:27.478622 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478627 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478633 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478638 | orchestrator |
2025-09-27 22:10:27.478643 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-27 22:10:27.478648 | orchestrator | Saturday 27 September 2025 22:04:40 +0000 (0:00:00.222) 0:04:33.263 ****
2025-09-27 22:10:27.478652 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478657 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478662 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478667 | orchestrator |
2025-09-27 22:10:27.478671 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-27 22:10:27.478676 | orchestrator | Saturday 27 September 2025 22:04:40 +0000 (0:00:00.438) 0:04:33.701 ****
2025-09-27 22:10:27.478696 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478701 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478706 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478711 | orchestrator |
2025-09-27 22:10:27.478716 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-27 22:10:27.478720 | orchestrator | Saturday 27 September 2025 22:04:40 +0000 (0:00:00.258) 0:04:33.960 ****
2025-09-27 22:10:27.478725 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478730 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478735 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478740 | orchestrator |
2025-09-27 22:10:27.478744 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-27 22:10:27.478749 | orchestrator | Saturday 27 September 2025 22:04:41 +0000 (0:00:00.267) 0:04:34.227 ****
2025-09-27 22:10:27.478759 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478764 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478768 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478774 | orchestrator |
2025-09-27 22:10:27.478778 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-27 22:10:27.478783 | orchestrator | Saturday 27 September 2025 22:04:41 +0000 (0:00:00.247) 0:04:34.475 ****
2025-09-27 22:10:27.478788 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.478792 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.478797 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.478802 | orchestrator |
2025-09-27 22:10:27.478806 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-27 22:10:27.478811 | orchestrator | Saturday 27 September 2025 22:04:41 +0000 (0:00:00.444) 0:04:34.919 ****
2025-09-27 22:10:27.478816 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.478821 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.478825 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.478830 | orchestrator |
2025-09-27 22:10:27.478835 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-27 22:10:27.478839 | orchestrator | Saturday 27 September 2025 22:04:42 +0000 (0:00:00.306) 0:04:35.226 ****
2025-09-27 22:10:27.478844 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.478849 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.478854 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.478858 | orchestrator |
2025-09-27 22:10:27.478863 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-09-27 22:10:27.478868 | orchestrator | Saturday 27 September 2025 22:04:42 +0000 (0:00:00.526) 0:04:35.753 ****
2025-09-27 22:10:27.478873 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-27 22:10:27.478878 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-27 22:10:27.478883 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-27 22:10:27.478888 | orchestrator |
2025-09-27 22:10:27.478893 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-09-27 22:10:27.478897 | orchestrator | Saturday 27 September 2025 22:04:43 +0000 (0:00:00.872) 0:04:36.625 ****
2025-09-27 22:10:27.478902 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:10:27.478907 | orchestrator |
2025-09-27 22:10:27.478912 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-09-27 22:10:27.478917 | orchestrator | Saturday 27 September 2025 22:04:44 +0000 (0:00:00.744) 0:04:37.370 ****
2025-09-27 22:10:27.478922 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.478927 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.478931 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.478936 | orchestrator |
2025-09-27 22:10:27.478941 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-09-27 22:10:27.478946 | orchestrator | Saturday 27 September 2025 22:04:44 +0000 (0:00:00.641) 0:04:38.012 ****
2025-09-27 22:10:27.478951 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.478956 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.478960 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.478965 | orchestrator |
2025-09-27 22:10:27.478970 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-09-27 22:10:27.478975 | orchestrator | Saturday 27 September 2025 22:04:45 +0000 (0:00:00.376) 0:04:38.388 ****
2025-09-27 22:10:27.478983 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-27 22:10:27.478988 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-27 22:10:27.478993 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-27 22:10:27.478997 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-09-27 22:10:27.479002 | orchestrator |
2025-09-27 22:10:27.479007 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-09-27 22:10:27.479016 | orchestrator | Saturday 27 September 2025 22:04:55 +0000 (0:00:10.176) 0:04:48.564 ****
2025-09-27 22:10:27.479021 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.479025 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.479030 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.479035 | orchestrator |
2025-09-27 22:10:27.479040 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-09-27 22:10:27.479045 | orchestrator | Saturday 27 September 2025 22:04:56 +0000 (0:00:00.593) 0:04:49.158 ****
2025-09-27 22:10:27.479050 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-27 22:10:27.479054 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-27 22:10:27.479059 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-27 22:10:27.479064 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:10:27.479068 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-27 22:10:27.479073 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:10:27.479078 | orchestrator |
2025-09-27 22:10:27.479083 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-09-27 22:10:27.479088 | orchestrator | Saturday 27 September 2025 22:04:58 +0000 (0:00:02.000) 0:04:51.158 ****
2025-09-27 22:10:27.479092 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-27 22:10:27.479097 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-27 22:10:27.479126 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-27 22:10:27.479131 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-27 22:10:27.479136 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-27 22:10:27.479141 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-27 22:10:27.479214 | orchestrator |
2025-09-27 22:10:27.479223 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-09-27 22:10:27.479228 | orchestrator | Saturday 27 September 2025 22:04:59 +0000 (0:00:01.293) 0:04:52.451 ****
2025-09-27 22:10:27.479233 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.479238 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.479242 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.479247 | orchestrator |
2025-09-27 22:10:27.479252 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-09-27 22:10:27.479257 | orchestrator | Saturday 27 September 2025 22:05:00 +0000 (0:00:00.687) 0:04:53.139 ****
2025-09-27 22:10:27.479262 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.479267 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.479272 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.479277 | orchestrator |
2025-09-27 22:10:27.479282 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-09-27 22:10:27.479287 | orchestrator | Saturday 27 September 2025 22:05:00 +0000 (0:00:00.524) 0:04:53.664 ****
2025-09-27 22:10:27.479292 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.479297 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.479302 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.479310 | orchestrator |
2025-09-27 22:10:27.479317 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-09-27 22:10:27.479327 | orchestrator | Saturday 27 September 2025 22:05:00 +0000 (0:00:00.300) 0:04:53.965 ****
2025-09-27 22:10:27.479340 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:10:27.479347 | orchestrator |
2025-09-27 22:10:27.479354 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-09-27 22:10:27.479362 | orchestrator | Saturday 27 September 2025 22:05:01 +0000 (0:00:00.504) 0:04:54.469 ****
2025-09-27 22:10:27.479369 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.479377 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.479384 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.479391 | orchestrator |
2025-09-27 22:10:27.479398 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-09-27 22:10:27.479412 | orchestrator | Saturday 27 September 2025 22:05:02 +0000 (0:00:00.586) 0:04:55.055 ****
2025-09-27 22:10:27.479420 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.479427 |
orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.479433 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:10:27.479440 | orchestrator | 2025-09-27 22:10:27.479447 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-27 22:10:27.479455 | orchestrator | Saturday 27 September 2025 22:05:02 +0000 (0:00:00.327) 0:04:55.382 **** 2025-09-27 22:10:27.479462 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:10:27.479469 | orchestrator | 2025-09-27 22:10:27.479476 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-27 22:10:27.479484 | orchestrator | Saturday 27 September 2025 22:05:02 +0000 (0:00:00.539) 0:04:55.922 **** 2025-09-27 22:10:27.479491 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.479499 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.479507 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.479514 | orchestrator | 2025-09-27 22:10:27.479521 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-27 22:10:27.479529 | orchestrator | Saturday 27 September 2025 22:05:04 +0000 (0:00:01.527) 0:04:57.449 **** 2025-09-27 22:10:27.479537 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.479544 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.479551 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.479558 | orchestrator | 2025-09-27 22:10:27.479567 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-27 22:10:27.479582 | orchestrator | Saturday 27 September 2025 22:05:05 +0000 (0:00:01.065) 0:04:58.515 **** 2025-09-27 22:10:27.479589 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.479596 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.479604 | 
orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.479612 | orchestrator | 2025-09-27 22:10:27.479619 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-27 22:10:27.479628 | orchestrator | Saturday 27 September 2025 22:05:07 +0000 (0:00:01.623) 0:05:00.139 **** 2025-09-27 22:10:27.479636 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.479643 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.479651 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.479659 | orchestrator | 2025-09-27 22:10:27.479667 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-27 22:10:27.479675 | orchestrator | Saturday 27 September 2025 22:05:09 +0000 (0:00:02.564) 0:05:02.703 **** 2025-09-27 22:10:27.479684 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.479689 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:10:27.479694 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-27 22:10:27.479699 | orchestrator | 2025-09-27 22:10:27.479703 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-27 22:10:27.479708 | orchestrator | Saturday 27 September 2025 22:05:10 +0000 (0:00:00.688) 0:05:03.392 **** 2025-09-27 22:10:27.479713 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-27 22:10:27.479718 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-27 22:10:27.479723 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-27 22:10:27.479745 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2025-09-27 22:10:27.479751 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-27 22:10:27.479755 | orchestrator | 2025-09-27 22:10:27.479760 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-27 22:10:27.479771 | orchestrator | Saturday 27 September 2025 22:05:34 +0000 (0:00:24.254) 0:05:27.647 **** 2025-09-27 22:10:27.479776 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-27 22:10:27.479781 | orchestrator | 2025-09-27 22:10:27.479786 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-27 22:10:27.479791 | orchestrator | Saturday 27 September 2025 22:05:35 +0000 (0:00:01.335) 0:05:28.983 **** 2025-09-27 22:10:27.479795 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.479800 | orchestrator | 2025-09-27 22:10:27.479805 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-27 22:10:27.479810 | orchestrator | Saturday 27 September 2025 22:05:36 +0000 (0:00:00.296) 0:05:29.279 **** 2025-09-27 22:10:27.479815 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.479820 | orchestrator | 2025-09-27 22:10:27.479825 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-27 22:10:27.479830 | orchestrator | Saturday 27 September 2025 22:05:36 +0000 (0:00:00.153) 0:05:29.433 **** 2025-09-27 22:10:27.479834 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-27 22:10:27.479839 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-27 22:10:27.479844 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-27 22:10:27.479849 | orchestrator | 2025-09-27 22:10:27.479854 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
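The "Wait for all mgr to be up" task above failed four times before succeeding, which is Ansible's `retries`/`delay`/`until` loop at work. A minimal sketch of that polling pattern in plain Python (the `wait_until` helper and its countdown message are illustrative assumptions, not ceph-ansible code):

```python
import time

def wait_until(check, retries=30, delay=5):
    """Re-run `check` until it returns truthy or retries run out,
    mirroring Ansible's retries/delay/until behaviour."""
    for attempt in range(retries, 0, -1):
        if check():
            return True
        # Matches the shape of the log line:
        # "FAILED - RETRYING: ... (N retries left)."
        print(f"FAILED - RETRYING: Wait for all mgr to be up "
              f"({attempt - 1} retries left).")
        time.sleep(delay)
    return False
```

In the run above the condition (all three mgr daemons registered with the cluster) became true on the fifth evaluation, roughly 24 seconds in, well within the 30-retry budget.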
************************************** 2025-09-27 22:10:27.479859 | orchestrator | Saturday 27 September 2025 22:05:42 +0000 (0:00:06.431) 0:05:35.864 **** 2025-09-27 22:10:27.479864 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-27 22:10:27.479868 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-27 22:10:27.479873 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-27 22:10:27.479878 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-27 22:10:27.479883 | orchestrator | 2025-09-27 22:10:27.479888 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 22:10:27.479893 | orchestrator | Saturday 27 September 2025 22:05:48 +0000 (0:00:05.240) 0:05:41.105 **** 2025-09-27 22:10:27.479897 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.479902 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.479907 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.479912 | orchestrator | 2025-09-27 22:10:27.479916 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-27 22:10:27.479921 | orchestrator | Saturday 27 September 2025 22:05:48 +0000 (0:00:00.681) 0:05:41.786 **** 2025-09-27 22:10:27.479926 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:10:27.479931 | orchestrator | 2025-09-27 22:10:27.479935 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-27 22:10:27.479940 | orchestrator | Saturday 27 September 2025 22:05:49 +0000 (0:00:00.547) 0:05:42.334 **** 2025-09-27 22:10:27.479945 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.479950 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.479954 | orchestrator | ok: 
[testbed-node-2] 2025-09-27 22:10:27.479959 | orchestrator | 2025-09-27 22:10:27.479964 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-27 22:10:27.479969 | orchestrator | Saturday 27 September 2025 22:05:49 +0000 (0:00:00.371) 0:05:42.705 **** 2025-09-27 22:10:27.479974 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:10:27.479978 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:10:27.479983 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:10:27.479988 | orchestrator | 2025-09-27 22:10:27.479993 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-27 22:10:27.480002 | orchestrator | Saturday 27 September 2025 22:05:51 +0000 (0:00:01.539) 0:05:44.245 **** 2025-09-27 22:10:27.480010 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 22:10:27.480015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-27 22:10:27.480020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 22:10:27.480025 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:10:27.480030 | orchestrator | 2025-09-27 22:10:27.480035 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-27 22:10:27.480040 | orchestrator | Saturday 27 September 2025 22:05:51 +0000 (0:00:00.618) 0:05:44.863 **** 2025-09-27 22:10:27.480044 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:10:27.480049 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:10:27.480054 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:10:27.480059 | orchestrator | 2025-09-27 22:10:27.480064 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-27 22:10:27.480068 | orchestrator | 2025-09-27 22:10:27.480074 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 
22:10:27.480079 | orchestrator | Saturday 27 September 2025 22:05:52 +0000 (0:00:00.668) 0:05:45.532 **** 2025-09-27 22:10:27.480084 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.480090 | orchestrator | 2025-09-27 22:10:27.480095 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 22:10:27.480099 | orchestrator | Saturday 27 September 2025 22:05:53 +0000 (0:00:00.590) 0:05:46.122 **** 2025-09-27 22:10:27.480105 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.480124 | orchestrator | 2025-09-27 22:10:27.480141 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 22:10:27.480146 | orchestrator | Saturday 27 September 2025 22:05:53 +0000 (0:00:00.459) 0:05:46.582 **** 2025-09-27 22:10:27.480151 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.480156 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480161 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480166 | orchestrator | 2025-09-27 22:10:27.480170 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 22:10:27.480175 | orchestrator | Saturday 27 September 2025 22:05:53 +0000 (0:00:00.410) 0:05:46.992 **** 2025-09-27 22:10:27.480180 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480185 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480190 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480195 | orchestrator | 2025-09-27 22:10:27.480199 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 22:10:27.480205 | orchestrator | Saturday 27 September 2025 22:05:54 +0000 (0:00:00.728) 0:05:47.720 **** 
2025-09-27 22:10:27.480210 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480214 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480219 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480224 | orchestrator | 2025-09-27 22:10:27.480228 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 22:10:27.480233 | orchestrator | Saturday 27 September 2025 22:05:55 +0000 (0:00:00.738) 0:05:48.458 **** 2025-09-27 22:10:27.480238 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480243 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480247 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480252 | orchestrator | 2025-09-27 22:10:27.480257 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 22:10:27.480262 | orchestrator | Saturday 27 September 2025 22:05:56 +0000 (0:00:00.671) 0:05:49.130 **** 2025-09-27 22:10:27.480266 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.480271 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480276 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480281 | orchestrator | 2025-09-27 22:10:27.480285 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 22:10:27.480295 | orchestrator | Saturday 27 September 2025 22:05:56 +0000 (0:00:00.525) 0:05:49.655 **** 2025-09-27 22:10:27.480300 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.480305 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480310 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480315 | orchestrator | 2025-09-27 22:10:27.480319 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 22:10:27.480324 | orchestrator | Saturday 27 September 2025 22:05:56 +0000 (0:00:00.271) 0:05:49.927 **** 2025-09-27 22:10:27.480329 | 
orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.480334 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480338 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480344 | orchestrator | 2025-09-27 22:10:27.480349 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 22:10:27.480353 | orchestrator | Saturday 27 September 2025 22:05:57 +0000 (0:00:00.259) 0:05:50.186 **** 2025-09-27 22:10:27.480358 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480363 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480368 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480373 | orchestrator | 2025-09-27 22:10:27.480378 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 22:10:27.480383 | orchestrator | Saturday 27 September 2025 22:05:57 +0000 (0:00:00.616) 0:05:50.803 **** 2025-09-27 22:10:27.480387 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480392 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480397 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480402 | orchestrator | 2025-09-27 22:10:27.480407 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 22:10:27.480411 | orchestrator | Saturday 27 September 2025 22:05:58 +0000 (0:00:01.027) 0:05:51.831 **** 2025-09-27 22:10:27.480416 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.480421 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480426 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480430 | orchestrator | 2025-09-27 22:10:27.480435 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 22:10:27.480440 | orchestrator | Saturday 27 September 2025 22:05:59 +0000 (0:00:00.339) 0:05:52.171 **** 2025-09-27 22:10:27.480449 | orchestrator | skipping: 
[testbed-node-3] 2025-09-27 22:10:27.480454 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480459 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480464 | orchestrator | 2025-09-27 22:10:27.480469 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 22:10:27.480474 | orchestrator | Saturday 27 September 2025 22:05:59 +0000 (0:00:00.377) 0:05:52.549 **** 2025-09-27 22:10:27.480479 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480483 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480488 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480493 | orchestrator | 2025-09-27 22:10:27.480497 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 22:10:27.480502 | orchestrator | Saturday 27 September 2025 22:05:59 +0000 (0:00:00.323) 0:05:52.872 **** 2025-09-27 22:10:27.480507 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480512 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480517 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480522 | orchestrator | 2025-09-27 22:10:27.480527 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 22:10:27.480532 | orchestrator | Saturday 27 September 2025 22:06:00 +0000 (0:00:00.336) 0:05:53.209 **** 2025-09-27 22:10:27.480537 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480542 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480546 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480551 | orchestrator | 2025-09-27 22:10:27.480556 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 22:10:27.480561 | orchestrator | Saturday 27 September 2025 22:06:00 +0000 (0:00:00.710) 0:05:53.919 **** 2025-09-27 22:10:27.480571 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.480576 | 
orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480580 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480585 | orchestrator | 2025-09-27 22:10:27.480590 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 22:10:27.480599 | orchestrator | Saturday 27 September 2025 22:06:01 +0000 (0:00:00.344) 0:05:54.264 **** 2025-09-27 22:10:27.480604 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.480609 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480614 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480619 | orchestrator | 2025-09-27 22:10:27.480623 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 22:10:27.480628 | orchestrator | Saturday 27 September 2025 22:06:01 +0000 (0:00:00.339) 0:05:54.603 **** 2025-09-27 22:10:27.480633 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.480637 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480642 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480647 | orchestrator | 2025-09-27 22:10:27.480651 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 22:10:27.480656 | orchestrator | Saturday 27 September 2025 22:06:01 +0000 (0:00:00.321) 0:05:54.925 **** 2025-09-27 22:10:27.480661 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480666 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480670 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480675 | orchestrator | 2025-09-27 22:10:27.480680 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 22:10:27.480685 | orchestrator | Saturday 27 September 2025 22:06:02 +0000 (0:00:00.622) 0:05:55.547 **** 2025-09-27 22:10:27.480689 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480694 | orchestrator | ok: 
[testbed-node-4] 2025-09-27 22:10:27.480699 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480704 | orchestrator | 2025-09-27 22:10:27.480709 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-27 22:10:27.480713 | orchestrator | Saturday 27 September 2025 22:06:03 +0000 (0:00:00.556) 0:05:56.104 **** 2025-09-27 22:10:27.480718 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480723 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480728 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480732 | orchestrator | 2025-09-27 22:10:27.480737 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-27 22:10:27.480742 | orchestrator | Saturday 27 September 2025 22:06:03 +0000 (0:00:00.329) 0:05:56.434 **** 2025-09-27 22:10:27.480747 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-27 22:10:27.480751 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 22:10:27.480756 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 22:10:27.480761 | orchestrator | 2025-09-27 22:10:27.480765 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-27 22:10:27.480770 | orchestrator | Saturday 27 September 2025 22:06:04 +0000 (0:00:01.177) 0:05:57.611 **** 2025-09-27 22:10:27.480775 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.480779 | orchestrator | 2025-09-27 22:10:27.480784 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-27 22:10:27.480789 | orchestrator | Saturday 27 September 2025 22:06:05 +0000 (0:00:00.555) 0:05:58.167 **** 2025-09-27 22:10:27.480794 | orchestrator | skipping: 
[testbed-node-3] 2025-09-27 22:10:27.480798 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480803 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480808 | orchestrator | 2025-09-27 22:10:27.480813 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-27 22:10:27.480818 | orchestrator | Saturday 27 September 2025 22:06:05 +0000 (0:00:00.336) 0:05:58.504 **** 2025-09-27 22:10:27.480827 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.480832 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.480837 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.480841 | orchestrator | 2025-09-27 22:10:27.480847 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-27 22:10:27.480851 | orchestrator | Saturday 27 September 2025 22:06:06 +0000 (0:00:00.544) 0:05:59.048 **** 2025-09-27 22:10:27.480856 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480861 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480865 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480870 | orchestrator | 2025-09-27 22:10:27.480875 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-27 22:10:27.480880 | orchestrator | Saturday 27 September 2025 22:06:06 +0000 (0:00:00.599) 0:05:59.648 **** 2025-09-27 22:10:27.480885 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.480889 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.480894 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.480899 | orchestrator | 2025-09-27 22:10:27.480904 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-27 22:10:27.480909 | orchestrator | Saturday 27 September 2025 22:06:06 +0000 (0:00:00.354) 0:06:00.003 **** 2025-09-27 22:10:27.480914 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-27 22:10:27.480919 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-27 22:10:27.480923 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-27 22:10:27.480928 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-27 22:10:27.480933 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-27 22:10:27.480938 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-27 22:10:27.480942 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-27 22:10:27.480947 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-27 22:10:27.480952 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-27 22:10:27.480963 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-27 22:10:27.480968 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-27 22:10:27.480973 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-27 22:10:27.480977 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-27 22:10:27.480982 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-27 22:10:27.480987 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-27 22:10:27.480992 | orchestrator | 2025-09-27 22:10:27.480996 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
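The "Apply operating system tuning" task loops over a list of sysctl items, each a dict with `name`, `value`, and an optional `enable` flag, as visible in the `(item=...)` output above. A sketch of how such a list could be rendered into `sysctl.conf`-style lines (the parameter values are copied from the log; the `render_sysctl` helper and its enable-by-default rule are assumptions for illustration, not the role's actual implementation):

```python
# Tuning items as they appear in the task output above.
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render_sysctl(params):
    """Render enabled items as 'key = value' lines, the format
    accepted by sysctl.conf / sysctl.d fragments."""
    # Assumption: items default to enabled unless 'enable' is False.
    return "\n".join(
        f"{p['name']} = {p['value']}"
        for p in params
        if p.get("enable", True)
    )
```

Note that `vm.min_free_kbytes` is not hard-coded: the two preceding tasks read the kernel default and set a `vm_min_free_kbytes` fact, which is what ends up in the item list here.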
2025-09-27 22:10:27.481001 | orchestrator | Saturday 27 September 2025 22:06:10 +0000 (0:00:03.192) 0:06:03.195 ****
2025-09-27 22:10:27.481006 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481010 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.481015 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.481020 | orchestrator |
2025-09-27 22:10:27.481025 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-27 22:10:27.481029 | orchestrator | Saturday 27 September 2025 22:06:10 +0000 (0:00:00.537) 0:06:03.733 ****
2025-09-27 22:10:27.481034 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.481039 | orchestrator |
2025-09-27 22:10:27.481044 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-27 22:10:27.481052 | orchestrator | Saturday 27 September 2025 22:06:11 +0000 (0:00:00.546) 0:06:04.280 ****
2025-09-27 22:10:27.481057 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-27 22:10:27.481062 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-27 22:10:27.481067 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-27 22:10:27.481072 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-27 22:10:27.481077 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-27 22:10:27.481082 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-27 22:10:27.481086 | orchestrator |
2025-09-27 22:10:27.481091 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-27 22:10:27.481096 | orchestrator | Saturday 27 September 2025 22:06:12 +0000 (0:00:01.043) 0:06:05.323 ****
2025-09-27 22:10:27.481100 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:10:27.481105 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-27 22:10:27.481183 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-27 22:10:27.481203 | orchestrator |
2025-09-27 22:10:27.481209 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-27 22:10:27.481214 | orchestrator | Saturday 27 September 2025 22:06:14 +0000 (0:00:02.204) 0:06:07.528 ****
2025-09-27 22:10:27.481219 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-27 22:10:27.481224 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-27 22:10:27.481229 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:10:27.481234 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-27 22:10:27.481238 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-27 22:10:27.481243 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:10:27.481248 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-27 22:10:27.481253 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-27 22:10:27.481257 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:10:27.481262 | orchestrator |
2025-09-27 22:10:27.481267 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-27 22:10:27.481272 | orchestrator | Saturday 27 September 2025 22:06:16 +0000 (0:00:01.637) 0:06:09.165 ****
2025-09-27 22:10:27.481277 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-27 22:10:27.481282 | orchestrator |
2025-09-27 22:10:27.481290 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-27 22:10:27.481295 | orchestrator | Saturday 27 September 2025 22:06:18 +0000 (0:00:01.913) 0:06:11.079 ****
2025-09-27 22:10:27.481300 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.481305 | orchestrator |
2025-09-27 22:10:27.481310 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-27 22:10:27.481315 | orchestrator | Saturday 27 September 2025 22:06:18 +0000 (0:00:00.527) 0:06:11.607 ****
2025-09-27 22:10:27.481320 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491', 'data_vg': 'ceph-3ef55d2f-0db9-555d-b1b6-fd7fdf57b491'})
2025-09-27 22:10:27.481327 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-be08f40e-52da-5801-960c-910a686d222b', 'data_vg': 'ceph-be08f40e-52da-5801-960c-910a686d222b'})
2025-09-27 22:10:27.481332 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2625e84f-b704-594b-a79a-2de5db7d7d7c', 'data_vg': 'ceph-2625e84f-b704-594b-a79a-2de5db7d7d7c'})
2025-09-27 22:10:27.481337 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8d8c80c3-887a-53bd-bc85-16ee8bc68188', 'data_vg': 'ceph-8d8c80c3-887a-53bd-bc85-16ee8bc68188'})
2025-09-27 22:10:27.481342 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a2801305-6ac8-5a65-9707-7cc055d05458', 'data_vg': 'ceph-a2801305-6ac8-5a65-9707-7cc055d05458'})
2025-09-27 22:10:27.481356 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-30a62591-9a6e-5933-8bc7-7c2bee7235f5', 'data_vg': 'ceph-30a62591-9a6e-5933-8bc7-7c2bee7235f5'})
2025-09-27 22:10:27.481361 | orchestrator |
2025-09-27 22:10:27.481366 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-27 22:10:27.481371 | orchestrator | Saturday 27 September 2025 22:07:05 +0000 (0:00:46.498) 0:06:58.105 ****
2025-09-27 22:10:27.481375 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481380 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.481385 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.481390 | orchestrator |
2025-09-27 22:10:27.481395 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-27 22:10:27.481400 | orchestrator | Saturday 27 September 2025 22:07:05 +0000 (0:00:00.567) 0:06:58.673 ****
2025-09-27 22:10:27.481404 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.481409 | orchestrator |
2025-09-27 22:10:27.481414 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-27 22:10:27.481419 | orchestrator | Saturday 27 September 2025 22:07:06 +0000 (0:00:00.566) 0:06:59.239 ****
2025-09-27 22:10:27.481424 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.481429 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.481433 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.481438 | orchestrator |
2025-09-27 22:10:27.481443 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-27 22:10:27.481448 | orchestrator | Saturday 27 September 2025 22:07:06 +0000 (0:00:00.664) 0:06:59.904 ****
2025-09-27 22:10:27.481453 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.481457 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.481462 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.481467 | orchestrator |
2025-09-27 22:10:27.481471 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-27 22:10:27.481476 | orchestrator | Saturday 27 September 2025 22:07:09 +0000 (0:00:02.807) 0:07:02.712 ****
2025-09-27 22:10:27.481481 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.481486 | orchestrator |
2025-09-27 22:10:27.481491 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-27 22:10:27.481496 | orchestrator | Saturday 27 September 2025 22:07:10 +0000 (0:00:00.519) 0:07:03.231 ****
2025-09-27 22:10:27.481500 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:10:27.481505 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:10:27.481510 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:10:27.481515 | orchestrator |
2025-09-27 22:10:27.481520 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-27 22:10:27.481524 | orchestrator | Saturday 27 September 2025 22:07:11 +0000 (0:00:01.155) 0:07:04.386 ****
2025-09-27 22:10:27.481529 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:10:27.481534 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:10:27.481539 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:10:27.481544 | orchestrator |
2025-09-27 22:10:27.481548 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-27 22:10:27.481553 | orchestrator | Saturday 27 September 2025 22:07:12 +0000 (0:00:01.316) 0:07:05.703 ****
2025-09-27 22:10:27.481558 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:10:27.481563 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:10:27.481567 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:10:27.481572 | orchestrator |
2025-09-27 22:10:27.481577 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-27 22:10:27.481582 | orchestrator | Saturday 27 September 2025 22:07:15 +0000 (0:00:02.552) 0:07:08.255 ****
2025-09-27 22:10:27.481587 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481592 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.481601 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.481606 | orchestrator |
2025-09-27 22:10:27.481611 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-09-27 22:10:27.481616 | orchestrator | Saturday 27 September 2025 22:07:15 +0000 (0:00:00.333) 0:07:08.589 ****
2025-09-27 22:10:27.481621 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481625 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.481635 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.481640 | orchestrator |
2025-09-27 22:10:27.481645 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-09-27 22:10:27.481649 | orchestrator | Saturday 27 September 2025 22:07:15 +0000 (0:00:00.327) 0:07:08.916 ****
2025-09-27 22:10:27.481654 | orchestrator | ok: [testbed-node-3] => (item=3)
2025-09-27 22:10:27.481659 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-27 22:10:27.481663 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-09-27 22:10:27.481668 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-09-27 22:10:27.481673 | orchestrator | ok: [testbed-node-4] => (item=5)
2025-09-27 22:10:27.481678 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-09-27 22:10:27.481682 | orchestrator |
2025-09-27 22:10:27.481687 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-09-27 22:10:27.481692 | orchestrator | Saturday 27 September 2025 22:07:17 +0000 (0:00:01.303) 0:07:10.219 ****
2025-09-27 22:10:27.481696 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-09-27 22:10:27.481701 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-09-27 22:10:27.481707 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-09-27 22:10:27.481711 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-09-27 22:10:27.481716 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-09-27 22:10:27.481721 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-09-27 22:10:27.481725 | orchestrator |
2025-09-27 22:10:27.481730 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-09-27 22:10:27.481735 | orchestrator | Saturday 27 September 2025 22:07:19 +0000 (0:00:02.125) 0:07:12.345 ****
2025-09-27 22:10:27.481740 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-09-27 22:10:27.481745 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-09-27 22:10:27.481750 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-09-27 22:10:27.481754 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-09-27 22:10:27.481762 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-09-27 22:10:27.481767 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-09-27 22:10:27.481773 | orchestrator |
2025-09-27 22:10:27.481777 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-09-27 22:10:27.481782 | orchestrator | Saturday 27 September 2025 22:07:22 +0000 (0:00:03.561) 0:07:15.906 ****
2025-09-27 22:10:27.481787 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481791 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.481796 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-27 22:10:27.481801 | orchestrator |
2025-09-27 22:10:27.481806 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-09-27 22:10:27.481810 | orchestrator | Saturday 27 September 2025 22:07:25 +0000 (0:00:02.148) 0:07:18.055 ****
2025-09-27 22:10:27.481815 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481820 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.481825 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-09-27 22:10:27.481830 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-27 22:10:27.481834 | orchestrator |
2025-09-27 22:10:27.481840 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-09-27 22:10:27.481845 | orchestrator | Saturday 27 September 2025 22:07:37 +0000 (0:00:12.787) 0:07:30.842 ****
2025-09-27 22:10:27.481849 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481854 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.481864 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.481868 | orchestrator |
2025-09-27 22:10:27.481873 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-27 22:10:27.481878 | orchestrator | Saturday 27 September 2025 22:07:38 +0000 (0:00:00.802) 0:07:31.644 ****
2025-09-27 22:10:27.481883 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481888 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.481892 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.481897 | orchestrator |
2025-09-27 22:10:27.481902 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-27 22:10:27.481907 | orchestrator | Saturday 27 September 2025 22:07:39 +0000 (0:00:00.586) 0:07:32.231 ****
2025-09-27 22:10:27.481911 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.481916 | orchestrator |
2025-09-27 22:10:27.481921 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-27 22:10:27.481926 | orchestrator | Saturday 27 September 2025 22:07:39 +0000 (0:00:00.503) 0:07:32.734 ****
2025-09-27 22:10:27.481931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:10:27.481935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-27 22:10:27.481940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-27 22:10:27.481945 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481950 | orchestrator |
2025-09-27 22:10:27.481954 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-27 22:10:27.481959 | orchestrator | Saturday 27 September 2025 22:07:40 +0000 (0:00:00.433) 0:07:33.167 ****
2025-09-27 22:10:27.481964 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481969 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.481974 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.481978 | orchestrator |
2025-09-27 22:10:27.481983 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-27 22:10:27.481988 | orchestrator | Saturday 27 September 2025 22:07:40 +0000 (0:00:00.581) 0:07:33.749 ****
2025-09-27 22:10:27.481992 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.481997 | orchestrator |
2025-09-27 22:10:27.482002 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-27 22:10:27.482007 | orchestrator | Saturday 27 September 2025 22:07:40 +0000 (0:00:00.212) 0:07:33.961 ****
2025-09-27 22:10:27.482011 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482060 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.482066 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.482070 | orchestrator |
2025-09-27 22:10:27.482079 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-27 22:10:27.482084 | orchestrator | Saturday 27 September 2025 22:07:41 +0000 (0:00:00.307) 0:07:34.269 ****
2025-09-27 22:10:27.482089 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482094 | orchestrator |
2025-09-27 22:10:27.482099 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-27 22:10:27.482103 | orchestrator | Saturday 27 September 2025 22:07:41 +0000 (0:00:00.226) 0:07:34.495 ****
2025-09-27 22:10:27.482108 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482151 | orchestrator |
2025-09-27 22:10:27.482156 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-27 22:10:27.482161 | orchestrator | Saturday 27 September 2025 22:07:41 +0000 (0:00:00.237) 0:07:34.733 ****
2025-09-27 22:10:27.482166 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482170 | orchestrator |
2025-09-27 22:10:27.482175 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-27 22:10:27.482180 | orchestrator | Saturday 27 September 2025 22:07:41 +0000 (0:00:00.128) 0:07:34.862 ****
2025-09-27 22:10:27.482185 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482190 | orchestrator |
2025-09-27 22:10:27.482195 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-27 22:10:27.482206 | orchestrator | Saturday 27 September 2025 22:07:42 +0000 (0:00:00.207) 0:07:35.070 ****
2025-09-27 22:10:27.482211 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482216 | orchestrator |
2025-09-27 22:10:27.482221 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-27 22:10:27.482225 | orchestrator | Saturday 27 September 2025 22:07:42 +0000 (0:00:00.251) 0:07:35.321 ****
2025-09-27 22:10:27.482230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-27 22:10:27.482235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:10:27.482245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-27 22:10:27.482250 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482255 | orchestrator |
2025-09-27 22:10:27.482260 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-27 22:10:27.482265 | orchestrator | Saturday 27 September 2025 22:07:42 +0000 (0:00:00.682) 0:07:36.004 ****
2025-09-27 22:10:27.482269 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482274 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.482280 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.482284 | orchestrator |
2025-09-27 22:10:27.482289 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-27 22:10:27.482294 | orchestrator | Saturday 27 September 2025 22:07:43 +0000 (0:00:00.612) 0:07:36.617 ****
2025-09-27 22:10:27.482299 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482303 | orchestrator |
2025-09-27 22:10:27.482308 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-27 22:10:27.482313 | orchestrator | Saturday 27 September 2025 22:07:43 +0000 (0:00:00.225) 0:07:36.842 ****
2025-09-27 22:10:27.482318 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482323 | orchestrator |
2025-09-27 22:10:27.482327 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-09-27 22:10:27.482332 | orchestrator |
2025-09-27 22:10:27.482337 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-27 22:10:27.482342 | orchestrator | Saturday 27 September 2025 22:07:44 +0000 (0:00:00.637) 0:07:37.480 ****
2025-09-27 22:10:27.482347 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.482353 | orchestrator |
2025-09-27 22:10:27.482359 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-27 22:10:27.482363 | orchestrator | Saturday 27 September 2025 22:07:45 +0000 (0:00:01.224) 0:07:38.704 ****
2025-09-27 22:10:27.482369 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.482374 | orchestrator |
2025-09-27 22:10:27.482378 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-27 22:10:27.482383 | orchestrator | Saturday 27 September 2025 22:07:46 +0000 (0:00:01.190) 0:07:39.895 ****
2025-09-27 22:10:27.482388 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482392 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.482397 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.482402 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.482407 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.482412 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.482416 | orchestrator |
2025-09-27 22:10:27.482421 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-27 22:10:27.482426 | orchestrator | Saturday 27 September 2025 22:07:47 +0000 (0:00:00.848) 0:07:40.743 ****
2025-09-27 22:10:27.482431 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.482436 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.482441 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.482449 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.482454 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.482459 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.482464 | orchestrator |
2025-09-27 22:10:27.482468 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-27 22:10:27.482473 | orchestrator | Saturday 27 September 2025 22:07:48 +0000 (0:00:00.980) 0:07:41.724 ****
2025-09-27 22:10:27.482478 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.482483 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.482488 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.482492 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.482497 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.482502 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.482507 | orchestrator |
2025-09-27 22:10:27.482512 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-27 22:10:27.482517 | orchestrator | Saturday 27 September 2025 22:07:49 +0000 (0:00:01.226) 0:07:42.950 ****
2025-09-27 22:10:27.482526 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.482531 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.482535 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.482540 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.482545 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.482550 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.482554 | orchestrator |
2025-09-27 22:10:27.482560 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-27 22:10:27.482564 | orchestrator | Saturday 27 September 2025 22:07:50 +0000 (0:00:01.031) 0:07:43.983 ****
2025-09-27 22:10:27.482569 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482574 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.482579 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.482584 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.482590 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.482597 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.482604 | orchestrator |
2025-09-27 22:10:27.482612 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-27 22:10:27.482620 | orchestrator | Saturday 27 September 2025 22:07:51 +0000 (0:00:00.959) 0:07:44.942 ****
2025-09-27 22:10:27.482627 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.482634 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.482642 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.482649 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482656 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.482663 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.482670 | orchestrator |
2025-09-27 22:10:27.482677 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-27 22:10:27.482684 | orchestrator | Saturday 27 September 2025 22:07:52 +0000 (0:00:00.592) 0:07:45.535 ****
2025-09-27 22:10:27.482692 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.482700 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.482709 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.482722 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482730 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.482735 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.482740 | orchestrator |
2025-09-27 22:10:27.482744 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-27 22:10:27.482749 | orchestrator | Saturday 27 September 2025 22:07:53 +0000 (0:00:00.829) 0:07:46.365 ****
2025-09-27 22:10:27.482753 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.482758 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.482763 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.482767 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.482772 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.482776 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.482781 | orchestrator |
2025-09-27 22:10:27.482786 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-27 22:10:27.482795 | orchestrator | Saturday 27 September 2025 22:07:54 +0000 (0:00:01.062) 0:07:47.428 ****
2025-09-27 22:10:27.482800 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.482804 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.482809 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.482814 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.482818 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.482823 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.482827 | orchestrator |
2025-09-27 22:10:27.482832 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-27 22:10:27.482836 | orchestrator | Saturday 27 September 2025 22:07:55 +0000 (0:00:01.398) 0:07:48.826 ****
2025-09-27 22:10:27.482841 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.482845 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.482853 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.482860 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482873 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.482883 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.482890 | orchestrator |
2025-09-27 22:10:27.482896 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-27 22:10:27.482902 | orchestrator | Saturday 27 September 2025 22:07:56 +0000 (0:00:00.605) 0:07:49.432 ****
2025-09-27 22:10:27.482909 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.482916 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.482924 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.482931 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.482939 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.482947 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.482954 | orchestrator |
2025-09-27 22:10:27.482961 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-27 22:10:27.482970 | orchestrator | Saturday 27 September 2025 22:07:56 +0000 (0:00:00.563) 0:07:49.995 ****
2025-09-27 22:10:27.482974 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.482979 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.482983 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.482988 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.482992 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.482997 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.483001 | orchestrator |
2025-09-27 22:10:27.483006 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-27 22:10:27.483010 | orchestrator | Saturday 27 September 2025 22:07:57 +0000 (0:00:00.836) 0:07:50.832 ****
2025-09-27 22:10:27.483015 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.483020 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.483024 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.483029 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.483033 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.483038 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.483042 | orchestrator |
2025-09-27 22:10:27.483047 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-27 22:10:27.483051 | orchestrator | Saturday 27 September 2025 22:07:58 +0000 (0:00:00.618) 0:07:51.450 ****
2025-09-27 22:10:27.483056 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.483060 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.483065 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.483069 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.483074 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.483078 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.483083 | orchestrator |
2025-09-27 22:10:27.483087 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-27 22:10:27.483098 | orchestrator | Saturday 27 September 2025 22:07:59 +0000 (0:00:00.877) 0:07:52.328 ****
2025-09-27 22:10:27.483102 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.483108 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.483142 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.483152 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.483160 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.483167 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.483173 | orchestrator |
2025-09-27 22:10:27.483180 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-27 22:10:27.483188 | orchestrator | Saturday 27 September 2025 22:07:59 +0000 (0:00:00.602) 0:07:52.930 ****
2025-09-27 22:10:27.483195 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:10:27.483201 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:10:27.483207 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:10:27.483213 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.483220 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.483227 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.483234 | orchestrator |
2025-09-27 22:10:27.483241 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-27 22:10:27.483249 | orchestrator | Saturday 27 September 2025 22:08:00 +0000 (0:00:00.852) 0:07:53.783 ****
2025-09-27 22:10:27.483255 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.483262 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.483269 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.483276 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:10:27.483283 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:10:27.483290 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:10:27.483297 | orchestrator |
2025-09-27 22:10:27.483305 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-27 22:10:27.483312 | orchestrator | Saturday 27 September 2025 22:08:01 +0000 (0:00:00.576) 0:07:54.359 ****
2025-09-27 22:10:27.483319 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.483326 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.483341 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.483349 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.483357 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.483364 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.483371 | orchestrator |
2025-09-27 22:10:27.483379 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-27 22:10:27.483387 | orchestrator | Saturday 27 September 2025 22:08:02 +0000 (0:00:00.851) 0:07:55.211 ****
2025-09-27 22:10:27.483394 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.483403 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.483411 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.483419 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.483426 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.483433 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.483440 | orchestrator |
2025-09-27 22:10:27.483446 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-09-27 22:10:27.483452 | orchestrator | Saturday 27 September 2025 22:08:03 +0000 (0:00:01.227) 0:07:56.438 ****
2025-09-27 22:10:27.483459 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.483465 | orchestrator |
2025-09-27 22:10:27.483472 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-09-27 22:10:27.483478 | orchestrator | Saturday 27 September 2025 22:08:07 +0000 (0:00:04.040) 0:08:00.479 ****
2025-09-27 22:10:27.483484 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.483491 | orchestrator |
2025-09-27 22:10:27.483499 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-09-27 22:10:27.483506 | orchestrator | Saturday 27 September 2025 22:08:09 +0000 (0:00:02.098) 0:08:02.577 ****
2025-09-27 22:10:27.483514 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.483521 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.483527 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.483534 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:10:27.483540 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:10:27.483546 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:10:27.483553 | orchestrator |
2025-09-27 22:10:27.483561 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-09-27 22:10:27.483583 | orchestrator | Saturday 27 September 2025 22:08:11 +0000 (0:00:01.819) 0:08:04.397 ****
2025-09-27 22:10:27.483592 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.483599 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.483606 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.483614 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:10:27.483621 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:10:27.483628 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:10:27.483635 | orchestrator |
2025-09-27 22:10:27.483643 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-09-27 22:10:27.483650 | orchestrator | Saturday 27 September 2025 22:08:12 +0000 (0:00:00.970) 0:08:05.367 ****
2025-09-27 22:10:27.483658 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.483667 | orchestrator |
2025-09-27 22:10:27.483675 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-09-27 22:10:27.483683 | orchestrator | Saturday 27 September 2025 22:08:13 +0000 (0:00:01.451) 0:08:06.819 ****
2025-09-27 22:10:27.483691 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.483699 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.483706 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.483713 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:10:27.483720 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:10:27.483728 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:10:27.483735 | orchestrator |
2025-09-27 22:10:27.483742 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-09-27 22:10:27.483749 | orchestrator | Saturday 27 September 2025 22:08:15 +0000 (0:00:01.602) 0:08:08.421 ****
2025-09-27 22:10:27.483757 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.483764 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.483772 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:10:27.483780 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:10:27.483787 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.483794 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:10:27.483801 | orchestrator |
2025-09-27 22:10:27.483808 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-09-27 22:10:27.483826 | orchestrator | Saturday 27 September 2025 22:08:19 +0000 (0:00:03.642) 0:08:12.064 ****
2025-09-27 22:10:27.483835 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:10:27.483842 | orchestrator |
2025-09-27 22:10:27.483849 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-09-27 22:10:27.483856 | orchestrator | Saturday 27 September 2025 22:08:20 +0000 (0:00:01.262) 0:08:13.327 ****
2025-09-27 22:10:27.483863 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.483870 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.483877 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:10:27.483884 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:10:27.483891 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:10:27.483898 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:10:27.483905 | orchestrator |
2025-09-27 22:10:27.483912 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-09-27 22:10:27.483920 | orchestrator | Saturday 27 September 2025 22:08:21 +0000 (0:00:00.707) 0:08:14.034 ****
2025-09-27 22:10:27.483927 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:10:27.483934 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:10:27.483941 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:10:27.483948 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:10:27.483955 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:10:27.483962 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:10:27.483969 | orchestrator |
2025-09-27 22:10:27.483975 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-09-27 22:10:27.483988 | orchestrator | Saturday 27 September 2025 22:08:23 +0000 (0:00:02.434) 0:08:16.469 ****
2025-09-27 22:10:27.483996 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:10:27.484003 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:10:27.484011 | orchestrator | ok:
[testbed-node-2] 2025-09-27 22:10:27.484018 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.484032 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484039 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484044 | orchestrator | 2025-09-27 22:10:27.484048 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-27 22:10:27.484053 | orchestrator | 2025-09-27 22:10:27.484057 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 22:10:27.484062 | orchestrator | Saturday 27 September 2025 22:08:24 +0000 (0:00:01.104) 0:08:17.573 **** 2025-09-27 22:10:27.484067 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.484072 | orchestrator | 2025-09-27 22:10:27.484076 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 22:10:27.484081 | orchestrator | Saturday 27 September 2025 22:08:25 +0000 (0:00:00.591) 0:08:18.164 **** 2025-09-27 22:10:27.484085 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.484090 | orchestrator | 2025-09-27 22:10:27.484094 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 22:10:27.484099 | orchestrator | Saturday 27 September 2025 22:08:26 +0000 (0:00:00.854) 0:08:19.019 **** 2025-09-27 22:10:27.484103 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484108 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484150 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484154 | orchestrator | 2025-09-27 22:10:27.484159 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 22:10:27.484163 | orchestrator | 
Saturday 27 September 2025 22:08:26 +0000 (0:00:00.299) 0:08:19.319 **** 2025-09-27 22:10:27.484168 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.484173 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484177 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484181 | orchestrator | 2025-09-27 22:10:27.484186 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 22:10:27.484191 | orchestrator | Saturday 27 September 2025 22:08:27 +0000 (0:00:00.706) 0:08:20.025 **** 2025-09-27 22:10:27.484195 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.484200 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484204 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484209 | orchestrator | 2025-09-27 22:10:27.484213 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 22:10:27.484218 | orchestrator | Saturday 27 September 2025 22:08:27 +0000 (0:00:00.724) 0:08:20.749 **** 2025-09-27 22:10:27.484222 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.484227 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484231 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484236 | orchestrator | 2025-09-27 22:10:27.484240 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 22:10:27.484245 | orchestrator | Saturday 27 September 2025 22:08:28 +0000 (0:00:00.956) 0:08:21.705 **** 2025-09-27 22:10:27.484249 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484254 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484258 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484263 | orchestrator | 2025-09-27 22:10:27.484268 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 22:10:27.484272 | orchestrator | Saturday 27 September 2025 22:08:28 +0000 (0:00:00.288) 
0:08:21.994 **** 2025-09-27 22:10:27.484277 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484281 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484291 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484295 | orchestrator | 2025-09-27 22:10:27.484300 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 22:10:27.484304 | orchestrator | Saturday 27 September 2025 22:08:29 +0000 (0:00:00.292) 0:08:22.286 **** 2025-09-27 22:10:27.484309 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484313 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484318 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484322 | orchestrator | 2025-09-27 22:10:27.484327 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 22:10:27.484331 | orchestrator | Saturday 27 September 2025 22:08:29 +0000 (0:00:00.289) 0:08:22.576 **** 2025-09-27 22:10:27.484336 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.484344 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484349 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484354 | orchestrator | 2025-09-27 22:10:27.484358 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 22:10:27.484363 | orchestrator | Saturday 27 September 2025 22:08:30 +0000 (0:00:00.965) 0:08:23.542 **** 2025-09-27 22:10:27.484367 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.484372 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484376 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484381 | orchestrator | 2025-09-27 22:10:27.484385 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 22:10:27.484390 | orchestrator | Saturday 27 September 2025 22:08:31 +0000 (0:00:00.729) 0:08:24.271 **** 2025-09-27 
22:10:27.484395 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484399 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484404 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484408 | orchestrator | 2025-09-27 22:10:27.484413 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 22:10:27.484417 | orchestrator | Saturday 27 September 2025 22:08:31 +0000 (0:00:00.313) 0:08:24.585 **** 2025-09-27 22:10:27.484422 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484426 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484431 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484435 | orchestrator | 2025-09-27 22:10:27.484440 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 22:10:27.484444 | orchestrator | Saturday 27 September 2025 22:08:31 +0000 (0:00:00.310) 0:08:24.895 **** 2025-09-27 22:10:27.484449 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.484453 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484458 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484462 | orchestrator | 2025-09-27 22:10:27.484467 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 22:10:27.484475 | orchestrator | Saturday 27 September 2025 22:08:32 +0000 (0:00:00.603) 0:08:25.499 **** 2025-09-27 22:10:27.484480 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.484484 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484489 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484493 | orchestrator | 2025-09-27 22:10:27.484498 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 22:10:27.484503 | orchestrator | Saturday 27 September 2025 22:08:32 +0000 (0:00:00.324) 0:08:25.823 **** 2025-09-27 22:10:27.484507 | orchestrator | ok: 
[testbed-node-3] 2025-09-27 22:10:27.484512 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484516 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484521 | orchestrator | 2025-09-27 22:10:27.484525 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 22:10:27.484530 | orchestrator | Saturday 27 September 2025 22:08:33 +0000 (0:00:00.342) 0:08:26.165 **** 2025-09-27 22:10:27.484534 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484539 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484544 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484548 | orchestrator | 2025-09-27 22:10:27.484556 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 22:10:27.484560 | orchestrator | Saturday 27 September 2025 22:08:33 +0000 (0:00:00.369) 0:08:26.535 **** 2025-09-27 22:10:27.484564 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484568 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484572 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484576 | orchestrator | 2025-09-27 22:10:27.484580 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 22:10:27.484584 | orchestrator | Saturday 27 September 2025 22:08:34 +0000 (0:00:00.559) 0:08:27.095 **** 2025-09-27 22:10:27.484588 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484592 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484597 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484601 | orchestrator | 2025-09-27 22:10:27.484605 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 22:10:27.484609 | orchestrator | Saturday 27 September 2025 22:08:34 +0000 (0:00:00.305) 0:08:27.401 **** 2025-09-27 22:10:27.484613 | orchestrator | ok: [testbed-node-3] 
2025-09-27 22:10:27.484617 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484621 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484625 | orchestrator | 2025-09-27 22:10:27.484629 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 22:10:27.484633 | orchestrator | Saturday 27 September 2025 22:08:34 +0000 (0:00:00.334) 0:08:27.735 **** 2025-09-27 22:10:27.484637 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.484641 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.484645 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.484649 | orchestrator | 2025-09-27 22:10:27.484654 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-27 22:10:27.484658 | orchestrator | Saturday 27 September 2025 22:08:35 +0000 (0:00:00.760) 0:08:28.496 **** 2025-09-27 22:10:27.484662 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484666 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484670 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-27 22:10:27.484675 | orchestrator | 2025-09-27 22:10:27.484679 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-27 22:10:27.484683 | orchestrator | Saturday 27 September 2025 22:08:35 +0000 (0:00:00.437) 0:08:28.934 **** 2025-09-27 22:10:27.484687 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-27 22:10:27.484692 | orchestrator | 2025-09-27 22:10:27.484696 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-27 22:10:27.484700 | orchestrator | Saturday 27 September 2025 22:08:38 +0000 (0:00:02.190) 0:08:31.124 **** 2025-09-27 22:10:27.484706 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-27 22:10:27.484712 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484716 | orchestrator | 2025-09-27 22:10:27.484720 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-27 22:10:27.484727 | orchestrator | Saturday 27 September 2025 22:08:38 +0000 (0:00:00.225) 0:08:31.349 **** 2025-09-27 22:10:27.484733 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-27 22:10:27.484744 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-27 22:10:27.484748 | orchestrator | 2025-09-27 22:10:27.484753 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-27 22:10:27.484760 | orchestrator | Saturday 27 September 2025 22:08:46 +0000 (0:00:08.480) 0:08:39.829 **** 2025-09-27 22:10:27.484764 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-27 22:10:27.484768 | orchestrator | 2025-09-27 22:10:27.484772 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-27 22:10:27.484776 | orchestrator | Saturday 27 September 2025 22:08:50 +0000 (0:00:03.694) 0:08:43.524 **** 2025-09-27 22:10:27.484780 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.484784 | orchestrator | 2025-09-27 22:10:27.484788 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-27 22:10:27.484796 | orchestrator | Saturday 27 September 2025 22:08:51 +0000 (0:00:00.838) 0:08:44.362 **** 2025-09-27 22:10:27.484800 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-27 22:10:27.484804 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-27 22:10:27.484808 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-27 22:10:27.484812 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-27 22:10:27.484816 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-27 22:10:27.484821 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-27 22:10:27.484825 | orchestrator | 2025-09-27 22:10:27.484829 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-27 22:10:27.484833 | orchestrator | Saturday 27 September 2025 22:08:52 +0000 (0:00:01.106) 0:08:45.469 **** 2025-09-27 22:10:27.484837 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 22:10:27.484841 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 22:10:27.484845 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 22:10:27.484850 | orchestrator | 2025-09-27 22:10:27.484854 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-27 22:10:27.484858 | orchestrator | Saturday 27 September 2025 22:08:54 +0000 (0:00:02.212) 0:08:47.681 **** 2025-09-27 22:10:27.484862 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 22:10:27.484867 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 22:10:27.484871 | orchestrator | changed: [testbed-node-3] 
2025-09-27 22:10:27.484875 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-27 22:10:27.484879 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-27 22:10:27.484883 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.484887 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 22:10:27.484891 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-27 22:10:27.484895 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.484899 | orchestrator | 2025-09-27 22:10:27.484903 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-27 22:10:27.484908 | orchestrator | Saturday 27 September 2025 22:08:55 +0000 (0:00:01.307) 0:08:48.988 **** 2025-09-27 22:10:27.484912 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.484916 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.484920 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.484924 | orchestrator | 2025-09-27 22:10:27.484928 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-27 22:10:27.484932 | orchestrator | Saturday 27 September 2025 22:08:58 +0000 (0:00:02.889) 0:08:51.878 **** 2025-09-27 22:10:27.484936 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.484940 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.484944 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.484948 | orchestrator | 2025-09-27 22:10:27.484952 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-27 22:10:27.484960 | orchestrator | Saturday 27 September 2025 22:08:59 +0000 (0:00:00.342) 0:08:52.220 **** 2025-09-27 22:10:27.484964 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.484968 | orchestrator | 2025-09-27 22:10:27.484972 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-27 22:10:27.484976 | orchestrator | Saturday 27 September 2025 22:08:59 +0000 (0:00:00.570) 0:08:52.791 **** 2025-09-27 22:10:27.484980 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.484985 | orchestrator | 2025-09-27 22:10:27.484989 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-27 22:10:27.484993 | orchestrator | Saturday 27 September 2025 22:09:00 +0000 (0:00:00.868) 0:08:53.659 **** 2025-09-27 22:10:27.484997 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.485001 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.485008 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.485012 | orchestrator | 2025-09-27 22:10:27.485016 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-27 22:10:27.485020 | orchestrator | Saturday 27 September 2025 22:09:01 +0000 (0:00:01.260) 0:08:54.920 **** 2025-09-27 22:10:27.485024 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.485029 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.485033 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.485037 | orchestrator | 2025-09-27 22:10:27.485041 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-27 22:10:27.485045 | orchestrator | Saturday 27 September 2025 22:09:03 +0000 (0:00:01.111) 0:08:56.031 **** 2025-09-27 22:10:27.485049 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.485053 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.485057 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.485062 | orchestrator | 2025-09-27 22:10:27.485066 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-09-27 22:10:27.485070 | orchestrator | Saturday 27 September 2025 22:09:05 +0000 (0:00:02.013) 0:08:58.045 **** 2025-09-27 22:10:27.485074 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.485078 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.485082 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.485086 | orchestrator | 2025-09-27 22:10:27.485090 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-27 22:10:27.485094 | orchestrator | Saturday 27 September 2025 22:09:06 +0000 (0:00:01.826) 0:08:59.871 **** 2025-09-27 22:10:27.485098 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485102 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485107 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485122 | orchestrator | 2025-09-27 22:10:27.485126 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 22:10:27.485130 | orchestrator | Saturday 27 September 2025 22:09:08 +0000 (0:00:01.481) 0:09:01.353 **** 2025-09-27 22:10:27.485138 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.485143 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.485147 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.485151 | orchestrator | 2025-09-27 22:10:27.485155 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-27 22:10:27.485159 | orchestrator | Saturday 27 September 2025 22:09:09 +0000 (0:00:00.677) 0:09:02.030 **** 2025-09-27 22:10:27.485163 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.485167 | orchestrator | 2025-09-27 22:10:27.485171 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-27 22:10:27.485175 | orchestrator | 
Saturday 27 September 2025 22:09:09 +0000 (0:00:00.441) 0:09:02.472 **** 2025-09-27 22:10:27.485180 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485184 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485191 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485195 | orchestrator | 2025-09-27 22:10:27.485199 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-27 22:10:27.485203 | orchestrator | Saturday 27 September 2025 22:09:09 +0000 (0:00:00.392) 0:09:02.864 **** 2025-09-27 22:10:27.485208 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.485212 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.485216 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.485220 | orchestrator | 2025-09-27 22:10:27.485224 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-27 22:10:27.485228 | orchestrator | Saturday 27 September 2025 22:09:11 +0000 (0:00:01.183) 0:09:04.048 **** 2025-09-27 22:10:27.485232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 22:10:27.485236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 22:10:27.485241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 22:10:27.485245 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485249 | orchestrator | 2025-09-27 22:10:27.485253 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-27 22:10:27.485257 | orchestrator | Saturday 27 September 2025 22:09:11 +0000 (0:00:00.622) 0:09:04.670 **** 2025-09-27 22:10:27.485261 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485265 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485269 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485273 | orchestrator | 2025-09-27 22:10:27.485277 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-09-27 22:10:27.485281 | orchestrator | 2025-09-27 22:10:27.485286 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 22:10:27.485290 | orchestrator | Saturday 27 September 2025 22:09:12 +0000 (0:00:00.943) 0:09:05.614 **** 2025-09-27 22:10:27.485294 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.485298 | orchestrator | 2025-09-27 22:10:27.485302 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 22:10:27.485306 | orchestrator | Saturday 27 September 2025 22:09:13 +0000 (0:00:01.399) 0:09:07.013 **** 2025-09-27 22:10:27.485310 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.485314 | orchestrator | 2025-09-27 22:10:27.485318 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 22:10:27.485322 | orchestrator | Saturday 27 September 2025 22:09:14 +0000 (0:00:00.823) 0:09:07.837 **** 2025-09-27 22:10:27.485327 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485331 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485335 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485339 | orchestrator | 2025-09-27 22:10:27.485343 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 22:10:27.485347 | orchestrator | Saturday 27 September 2025 22:09:15 +0000 (0:00:00.949) 0:09:08.786 **** 2025-09-27 22:10:27.485351 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485355 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485359 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485363 | orchestrator | 
2025-09-27 22:10:27.485370 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 22:10:27.485374 | orchestrator | Saturday 27 September 2025 22:09:16 +0000 (0:00:00.838) 0:09:09.624 **** 2025-09-27 22:10:27.485378 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485382 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485386 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485390 | orchestrator | 2025-09-27 22:10:27.485395 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 22:10:27.485399 | orchestrator | Saturday 27 September 2025 22:09:17 +0000 (0:00:00.763) 0:09:10.388 **** 2025-09-27 22:10:27.485403 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485410 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485414 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485418 | orchestrator | 2025-09-27 22:10:27.485422 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 22:10:27.485426 | orchestrator | Saturday 27 September 2025 22:09:18 +0000 (0:00:00.820) 0:09:11.208 **** 2025-09-27 22:10:27.485430 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485434 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485438 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485442 | orchestrator | 2025-09-27 22:10:27.485447 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 22:10:27.485451 | orchestrator | Saturday 27 September 2025 22:09:18 +0000 (0:00:00.782) 0:09:11.991 **** 2025-09-27 22:10:27.485455 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485459 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485463 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485467 | orchestrator | 2025-09-27 22:10:27.485471 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 22:10:27.485475 | orchestrator | Saturday 27 September 2025 22:09:19 +0000 (0:00:00.366) 0:09:12.358 **** 2025-09-27 22:10:27.485479 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485483 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485490 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485494 | orchestrator | 2025-09-27 22:10:27.485498 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 22:10:27.485503 | orchestrator | Saturday 27 September 2025 22:09:19 +0000 (0:00:00.356) 0:09:12.714 **** 2025-09-27 22:10:27.485507 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485511 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485515 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485519 | orchestrator | 2025-09-27 22:10:27.485523 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 22:10:27.485527 | orchestrator | Saturday 27 September 2025 22:09:20 +0000 (0:00:00.756) 0:09:13.470 **** 2025-09-27 22:10:27.485531 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485535 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485539 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485543 | orchestrator | 2025-09-27 22:10:27.485547 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 22:10:27.485551 | orchestrator | Saturday 27 September 2025 22:09:21 +0000 (0:00:00.967) 0:09:14.437 **** 2025-09-27 22:10:27.485555 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485559 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485563 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485567 | orchestrator | 2025-09-27 22:10:27.485571 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-09-27 22:10:27.485575 | orchestrator | Saturday 27 September 2025 22:09:21 +0000 (0:00:00.310) 0:09:14.747 **** 2025-09-27 22:10:27.485580 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485584 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485588 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485592 | orchestrator | 2025-09-27 22:10:27.485596 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 22:10:27.485600 | orchestrator | Saturday 27 September 2025 22:09:22 +0000 (0:00:00.314) 0:09:15.062 **** 2025-09-27 22:10:27.485604 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485608 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485612 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485616 | orchestrator | 2025-09-27 22:10:27.485620 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 22:10:27.485624 | orchestrator | Saturday 27 September 2025 22:09:22 +0000 (0:00:00.333) 0:09:15.395 **** 2025-09-27 22:10:27.485628 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485632 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485640 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485644 | orchestrator | 2025-09-27 22:10:27.485648 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 22:10:27.485652 | orchestrator | Saturday 27 September 2025 22:09:22 +0000 (0:00:00.589) 0:09:15.985 **** 2025-09-27 22:10:27.485656 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485660 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485664 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485668 | orchestrator | 2025-09-27 22:10:27.485672 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-09-27 22:10:27.485677 | orchestrator | Saturday 27 September 2025 22:09:23 +0000 (0:00:00.341) 0:09:16.326 **** 2025-09-27 22:10:27.485681 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485685 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485689 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485693 | orchestrator | 2025-09-27 22:10:27.485697 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 22:10:27.485701 | orchestrator | Saturday 27 September 2025 22:09:23 +0000 (0:00:00.304) 0:09:16.631 **** 2025-09-27 22:10:27.485705 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485709 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485713 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485717 | orchestrator | 2025-09-27 22:10:27.485721 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 22:10:27.485725 | orchestrator | Saturday 27 September 2025 22:09:23 +0000 (0:00:00.287) 0:09:16.919 **** 2025-09-27 22:10:27.485729 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485733 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485737 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485741 | orchestrator | 2025-09-27 22:10:27.485746 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 22:10:27.485752 | orchestrator | Saturday 27 September 2025 22:09:24 +0000 (0:00:00.533) 0:09:17.453 **** 2025-09-27 22:10:27.485756 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485761 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485765 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485769 | orchestrator | 2025-09-27 22:10:27.485773 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-09-27 22:10:27.485777 | orchestrator | Saturday 27 September 2025 22:09:24 +0000 (0:00:00.334) 0:09:17.787 **** 2025-09-27 22:10:27.485781 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.485785 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.485789 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.485793 | orchestrator | 2025-09-27 22:10:27.485797 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-27 22:10:27.485801 | orchestrator | Saturday 27 September 2025 22:09:25 +0000 (0:00:00.550) 0:09:18.338 **** 2025-09-27 22:10:27.485805 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.485809 | orchestrator | 2025-09-27 22:10:27.485813 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-27 22:10:27.485817 | orchestrator | Saturday 27 September 2025 22:09:26 +0000 (0:00:00.805) 0:09:19.143 **** 2025-09-27 22:10:27.485821 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 22:10:27.485825 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 22:10:27.485829 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 22:10:27.485834 | orchestrator | 2025-09-27 22:10:27.485838 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-27 22:10:27.485842 | orchestrator | Saturday 27 September 2025 22:09:28 +0000 (0:00:02.132) 0:09:21.276 **** 2025-09-27 22:10:27.485846 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 22:10:27.485852 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 22:10:27.485857 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.485864 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-27 22:10:27.485868 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-27 22:10:27.485872 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.485876 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 22:10:27.485880 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-27 22:10:27.485884 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.485888 | orchestrator | 2025-09-27 22:10:27.485893 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-27 22:10:27.485897 | orchestrator | Saturday 27 September 2025 22:09:29 +0000 (0:00:01.220) 0:09:22.496 **** 2025-09-27 22:10:27.485901 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.485905 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.485909 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.485913 | orchestrator | 2025-09-27 22:10:27.485917 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-27 22:10:27.485921 | orchestrator | Saturday 27 September 2025 22:09:29 +0000 (0:00:00.310) 0:09:22.807 **** 2025-09-27 22:10:27.485925 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.485929 | orchestrator | 2025-09-27 22:10:27.485933 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-27 22:10:27.485937 | orchestrator | Saturday 27 September 2025 22:09:30 +0000 (0:00:00.819) 0:09:23.626 **** 2025-09-27 22:10:27.485941 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-27 22:10:27.485946 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-09-27 22:10:27.485950 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-27 22:10:27.485954 | orchestrator | 2025-09-27 22:10:27.485958 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-27 22:10:27.485962 | orchestrator | Saturday 27 September 2025 22:09:31 +0000 (0:00:00.802) 0:09:24.429 **** 2025-09-27 22:10:27.485966 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 22:10:27.485971 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-27 22:10:27.485975 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 22:10:27.485979 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-27 22:10:27.485983 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 22:10:27.485987 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-27 22:10:27.485991 | orchestrator | 2025-09-27 22:10:27.485995 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-27 22:10:27.485999 | orchestrator | Saturday 27 September 2025 22:09:36 +0000 (0:00:04.679) 0:09:29.109 **** 2025-09-27 22:10:27.486003 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 22:10:27.486007 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 22:10:27.486012 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-09-27 22:10:27.486048 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 22:10:27.486052 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 22:10:27.486056 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 22:10:27.486064 | orchestrator | 2025-09-27 22:10:27.486068 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-27 22:10:27.486072 | orchestrator | Saturday 27 September 2025 22:09:38 +0000 (0:00:02.673) 0:09:31.782 **** 2025-09-27 22:10:27.486076 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 22:10:27.486080 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.486084 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-27 22:10:27.486088 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.486093 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 22:10:27.486097 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.486101 | orchestrator | 2025-09-27 22:10:27.486105 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-27 22:10:27.486119 | orchestrator | Saturday 27 September 2025 22:09:39 +0000 (0:00:01.210) 0:09:32.993 **** 2025-09-27 22:10:27.486123 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-27 22:10:27.486127 | orchestrator | 2025-09-27 22:10:27.486132 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-27 22:10:27.486136 | orchestrator | Saturday 27 September 2025 22:09:40 +0000 (0:00:00.258) 0:09:33.251 **** 2025-09-27 22:10:27.486140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486148 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486165 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.486169 | orchestrator | 2025-09-27 22:10:27.486173 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-27 22:10:27.486177 | orchestrator | Saturday 27 September 2025 22:09:41 +0000 (0:00:00.843) 0:09:34.095 **** 2025-09-27 22:10:27.486181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 22:10:27.486202 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.486206 | orchestrator | 2025-09-27 22:10:27.486210 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-27 22:10:27.486214 | orchestrator | Saturday 27 September 2025 22:09:41 +0000 (0:00:00.800) 0:09:34.896 **** 2025-09-27 22:10:27.486218 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 22:10:27.486222 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 22:10:27.486226 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 22:10:27.486234 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 22:10:27.486238 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 22:10:27.486242 | orchestrator | 2025-09-27 22:10:27.486246 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-27 22:10:27.486250 | orchestrator | Saturday 27 September 2025 22:10:12 +0000 (0:00:30.771) 0:10:05.668 **** 2025-09-27 22:10:27.486254 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.486258 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.486262 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.486266 | orchestrator | 2025-09-27 22:10:27.486270 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-27 22:10:27.486278 | orchestrator | Saturday 27 September 2025 22:10:13 +0000 (0:00:00.797) 
0:10:06.465 **** 2025-09-27 22:10:27.486282 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.486286 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.486290 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.486294 | orchestrator | 2025-09-27 22:10:27.486299 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-27 22:10:27.486303 | orchestrator | Saturday 27 September 2025 22:10:13 +0000 (0:00:00.333) 0:10:06.799 **** 2025-09-27 22:10:27.486307 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.486311 | orchestrator | 2025-09-27 22:10:27.486315 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-27 22:10:27.486319 | orchestrator | Saturday 27 September 2025 22:10:14 +0000 (0:00:00.508) 0:10:07.308 **** 2025-09-27 22:10:27.486323 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.486327 | orchestrator | 2025-09-27 22:10:27.486331 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-27 22:10:27.486335 | orchestrator | Saturday 27 September 2025 22:10:15 +0000 (0:00:00.785) 0:10:08.094 **** 2025-09-27 22:10:27.486340 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.486344 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.486348 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.486352 | orchestrator | 2025-09-27 22:10:27.486356 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-27 22:10:27.486360 | orchestrator | Saturday 27 September 2025 22:10:16 +0000 (0:00:01.327) 0:10:09.421 **** 2025-09-27 22:10:27.486364 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.486368 | orchestrator 
| changed: [testbed-node-4] 2025-09-27 22:10:27.486372 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.486377 | orchestrator | 2025-09-27 22:10:27.486383 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-27 22:10:27.486387 | orchestrator | Saturday 27 September 2025 22:10:17 +0000 (0:00:01.171) 0:10:10.593 **** 2025-09-27 22:10:27.486391 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:10:27.486395 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:10:27.486399 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:10:27.486404 | orchestrator | 2025-09-27 22:10:27.486408 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-27 22:10:27.486412 | orchestrator | Saturday 27 September 2025 22:10:19 +0000 (0:00:01.966) 0:10:12.560 **** 2025-09-27 22:10:27.486416 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-27 22:10:27.486420 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-27 22:10:27.486428 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-27 22:10:27.486432 | orchestrator | 2025-09-27 22:10:27.486436 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 22:10:27.486440 | orchestrator | Saturday 27 September 2025 22:10:21 +0000 (0:00:02.416) 0:10:14.976 **** 2025-09-27 22:10:27.486444 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.486448 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.486453 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.486457 | orchestrator | 2025-09-27 22:10:27.486461 | orchestrator | RUNNING 
HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-27 22:10:27.486465 | orchestrator | Saturday 27 September 2025 22:10:22 +0000 (0:00:00.608) 0:10:15.585 **** 2025-09-27 22:10:27.486469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:10:27.486473 | orchestrator | 2025-09-27 22:10:27.486477 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-27 22:10:27.486481 | orchestrator | Saturday 27 September 2025 22:10:23 +0000 (0:00:00.519) 0:10:16.105 **** 2025-09-27 22:10:27.486485 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.486489 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.486493 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.486497 | orchestrator | 2025-09-27 22:10:27.486501 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-27 22:10:27.486505 | orchestrator | Saturday 27 September 2025 22:10:23 +0000 (0:00:00.322) 0:10:16.427 **** 2025-09-27 22:10:27.486510 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:10:27.486514 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:10:27.486518 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:10:27.486522 | orchestrator | 2025-09-27 22:10:27.486526 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-27 22:10:27.486530 | orchestrator | Saturday 27 September 2025 22:10:23 +0000 (0:00:00.569) 0:10:16.997 **** 2025-09-27 22:10:27.486534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 22:10:27.486538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 22:10:27.486542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 22:10:27.486546 | orchestrator | skipping: [testbed-node-3] 2025-09-27 
22:10:27.486550 | orchestrator | 2025-09-27 22:10:27.486554 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-27 22:10:27.486559 | orchestrator | Saturday 27 September 2025 22:10:24 +0000 (0:00:00.627) 0:10:17.624 **** 2025-09-27 22:10:27.486563 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:10:27.486567 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:10:27.486571 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:10:27.486575 | orchestrator | 2025-09-27 22:10:27.486579 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:10:27.486586 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-09-27 22:10:27.486593 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-27 22:10:27.486599 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-27 22:10:27.486606 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-09-27 22:10:27.486612 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-27 22:10:27.486622 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-27 22:10:27.486629 | orchestrator | 2025-09-27 22:10:27.486635 | orchestrator | 2025-09-27 22:10:27.486639 | orchestrator | 2025-09-27 22:10:27.486643 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:10:27.486647 | orchestrator | Saturday 27 September 2025 22:10:24 +0000 (0:00:00.260) 0:10:17.885 **** 2025-09-27 22:10:27.486651 | orchestrator | =============================================================================== 2025-09-27 22:10:27.486656 | 
orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 46.50s 2025-09-27 22:10:27.486660 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 44.65s 2025-09-27 22:10:27.486667 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.77s 2025-09-27 22:10:27.486671 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.25s 2025-09-27 22:10:27.486675 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.81s 2025-09-27 22:10:27.486679 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.79s 2025-09-27 22:10:27.486683 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.18s 2025-09-27 22:10:27.486687 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.52s 2025-09-27 22:10:27.486691 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.48s 2025-09-27 22:10:27.486695 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.43s 2025-09-27 22:10:27.486699 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.34s 2025-09-27 22:10:27.486703 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.24s 2025-09-27 22:10:27.486707 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.68s 2025-09-27 22:10:27.486712 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.04s 2025-09-27 22:10:27.486716 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.69s 2025-09-27 22:10:27.486720 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.64s 2025-09-27 22:10:27.486724 | orchestrator | 
ceph-osd : Systemd start osd -------------------------------------------- 3.56s 2025-09-27 22:10:27.486728 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.56s 2025-09-27 22:10:27.486732 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.29s 2025-09-27 22:10:27.486736 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.19s 2025-09-27 22:10:27.486740 | orchestrator | 2025-09-27 22:10:27 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:10:27.486744 | orchestrator | 2025-09-27 22:10:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:10:30.508712 | orchestrator | 2025-09-27 22:10:30 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:10:30.511551 | orchestrator | 2025-09-27 22:10:30 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:10:30.511571 | orchestrator | 2025-09-27 22:10:30 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:10:30.511578 | orchestrator | 2025-09-27 22:10:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:10:33.533965 | orchestrator | 2025-09-27 22:10:33 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:10:33.537459 | orchestrator | 2025-09-27 22:10:33 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:10:33.540699 | orchestrator | 2025-09-27 22:10:33 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:10:33.540780 | orchestrator | 2025-09-27 22:10:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:10:36.582454 | orchestrator | 2025-09-27 22:10:36 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state STARTED 2025-09-27 22:10:36.583947 | orchestrator | 2025-09-27 22:10:36 | INFO  | Task 
20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:10:36.586649 | orchestrator | 2025-09-27 22:10:36 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:10:36.586721 | orchestrator | 2025-09-27 22:10:36 | INFO  | Wait 1 second(s) until the next check [identical polling cycle repeated every ~3 seconds until 22:11:34; tasks 9590aac2-435d-4c5e-a040-a5b3131686dc, 20c6176b-f843-4e93-8995-5edb1859fcaa and 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 remained in state STARTED throughout] 2025-09-27 22:11:34.542564 | orchestrator | 
2025-09-27 22:11:34 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:11:34.542625 | orchestrator | 2025-09-27 22:11:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:11:37.598741 | orchestrator | 2025-09-27 22:11:37 | INFO  | Task 9590aac2-435d-4c5e-a040-a5b3131686dc is in state SUCCESS 2025-09-27 22:11:37.599810 | orchestrator | 2025-09-27 22:11:37.599857 | orchestrator | 2025-09-27 22:11:37.599920 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:11:37.599935 | orchestrator | 2025-09-27 22:11:37.599947 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:11:37.599960 | orchestrator | Saturday 27 September 2025 22:08:42 +0000 (0:00:00.280) 0:00:00.280 **** 2025-09-27 22:11:37.599971 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:37.599985 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:11:37.600022 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:11:37.600035 | orchestrator | 2025-09-27 22:11:37.600046 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:11:37.600107 | orchestrator | Saturday 27 September 2025 22:08:42 +0000 (0:00:00.294) 0:00:00.575 **** 2025-09-27 22:11:37.600121 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-27 22:11:37.600133 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-27 22:11:37.600143 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-27 22:11:37.600153 | orchestrator | 2025-09-27 22:11:37.600163 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-27 22:11:37.600174 | orchestrator | 2025-09-27 22:11:37.600184 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-27 22:11:37.600195 | orchestrator | 
Saturday 27 September 2025 22:08:43 +0000 (0:00:00.403) 0:00:00.978 **** 2025-09-27 22:11:37.600207 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:11:37.600218 | orchestrator | 2025-09-27 22:11:37.600229 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-27 22:11:37.600240 | orchestrator | Saturday 27 September 2025 22:08:43 +0000 (0:00:00.483) 0:00:01.462 **** 2025-09-27 22:11:37.600251 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-27 22:11:37.600261 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-27 22:11:37.600273 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-27 22:11:37.600284 | orchestrator | 2025-09-27 22:11:37.600295 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-27 22:11:37.600306 | orchestrator | Saturday 27 September 2025 22:08:44 +0000 (0:00:00.662) 0:00:02.124 **** 2025-09-27 22:11:37.600322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.600338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.600388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.600418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.600433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.600452 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.600465 | orchestrator | 2025-09-27 22:11:37.600477 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-27 22:11:37.600495 | orchestrator | Saturday 27 September 2025 22:08:46 +0000 (0:00:01.632) 0:00:03.757 **** 2025-09-27 22:11:37.600507 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:11:37.600527 | orchestrator | 2025-09-27 22:11:37.600539 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-27 22:11:37.600551 | orchestrator | Saturday 27 September 2025 22:08:46 +0000 (0:00:00.488) 0:00:04.245 **** 2025-09-27 22:11:37.600576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.600590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.600602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.600615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.600642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.600664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.600677 | orchestrator | 2025-09-27 22:11:37.600688 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-27 22:11:37.600701 | orchestrator | Saturday 27 September 2025 22:08:49 +0000 (0:00:02.527) 0:00:06.773 **** 2025-09-27 
22:11:37.600713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 22:11:37.600725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 22:11:37.600746 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 22:11:37.600774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 22:11:37.600798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 22:11:37.600811 
| orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:37.600824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 22:11:37.600836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2025-09-27 22:11:37.600858 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:37.600869 | orchestrator | 2025-09-27 22:11:37.600880 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-27 22:11:37.600892 | orchestrator | Saturday 27 September 2025 22:08:50 +0000 (0:00:00.935) 0:00:07.708 **** 2025-09-27 22:11:37.600909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 22:11:37.600931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 22:11:37.600944 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:37.600956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 22:11:37.600968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 22:11:37.600993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 22:11:37.601015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 22:11:37.601027 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:37.601040 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:37.601051 | orchestrator | 2025-09-27 22:11:37.601090 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-27 22:11:37.601098 | orchestrator | Saturday 27 September 2025 22:08:51 +0000 (0:00:01.116) 0:00:08.824 **** 2025-09-27 22:11:37.601105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.601115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.601141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.601162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.601189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.601213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.601230 | orchestrator | 2025-09-27 22:11:37.601237 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-27 22:11:37.601243 | orchestrator | Saturday 27 September 2025 22:08:53 +0000 (0:00:02.607) 0:00:11.432 **** 2025-09-27 22:11:37.601250 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:11:37.601257 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:37.601264 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:11:37.601270 | orchestrator | 2025-09-27 22:11:37.601277 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-27 22:11:37.601284 | orchestrator | Saturday 27 September 2025 22:08:57 +0000 (0:00:03.300) 0:00:14.732 **** 2025-09-27 22:11:37.601290 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:37.601297 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:11:37.601303 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:11:37.601310 | orchestrator | 2025-09-27 22:11:37.601317 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-27 22:11:37.601323 | orchestrator | Saturday 27 September 2025 22:08:58 +0000 (0:00:01.869) 0:00:16.602 **** 2025-09-27 22:11:37.601334 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.601575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.601598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 22:11:37.601611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.601638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.601661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 22:11:37.601674 | orchestrator | 2025-09-27 22:11:37.601684 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-27 
22:11:37.601696 | orchestrator | Saturday 27 September 2025 22:09:01 +0000 (0:00:02.315) 0:00:18.917 **** 2025-09-27 22:11:37.601708 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:37.601719 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:37.601731 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:37.601742 | orchestrator | 2025-09-27 22:11:37.601754 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-27 22:11:37.601765 | orchestrator | Saturday 27 September 2025 22:09:01 +0000 (0:00:00.301) 0:00:19.219 **** 2025-09-27 22:11:37.601774 | orchestrator | 2025-09-27 22:11:37.601781 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-27 22:11:37.601788 | orchestrator | Saturday 27 September 2025 22:09:01 +0000 (0:00:00.064) 0:00:19.284 **** 2025-09-27 22:11:37.601795 | orchestrator | 2025-09-27 22:11:37.601807 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-27 22:11:37.601814 | orchestrator | Saturday 27 September 2025 22:09:01 +0000 (0:00:00.063) 0:00:19.347 **** 2025-09-27 22:11:37.601820 | orchestrator | 2025-09-27 22:11:37.601827 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-27 22:11:37.601833 | orchestrator | Saturday 27 September 2025 22:09:01 +0000 (0:00:00.071) 0:00:19.419 **** 2025-09-27 22:11:37.601840 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:37.601847 | orchestrator | 2025-09-27 22:11:37.601853 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-27 22:11:37.601860 | orchestrator | Saturday 27 September 2025 22:09:01 +0000 (0:00:00.208) 0:00:19.628 **** 2025-09-27 22:11:37.601866 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:37.601873 | orchestrator | 2025-09-27 22:11:37.601879 | orchestrator | RUNNING HANDLER 
[opensearch : Restart opensearch container] ******************** 2025-09-27 22:11:37.601886 | orchestrator | Saturday 27 September 2025 22:09:02 +0000 (0:00:00.595) 0:00:20.223 **** 2025-09-27 22:11:37.601893 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:37.601899 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:11:37.601906 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:11:37.601912 | orchestrator | 2025-09-27 22:11:37.601919 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-27 22:11:37.601925 | orchestrator | Saturday 27 September 2025 22:10:05 +0000 (0:01:02.862) 0:01:23.086 **** 2025-09-27 22:11:37.601932 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:37.601939 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:11:37.601945 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:11:37.601952 | orchestrator | 2025-09-27 22:11:37.601958 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-27 22:11:37.601965 | orchestrator | Saturday 27 September 2025 22:11:25 +0000 (0:01:20.083) 0:02:43.169 **** 2025-09-27 22:11:37.601972 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:11:37.601979 | orchestrator | 2025-09-27 22:11:37.601985 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-27 22:11:37.601992 | orchestrator | Saturday 27 September 2025 22:11:26 +0000 (0:00:00.536) 0:02:43.706 **** 2025-09-27 22:11:37.601999 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:37.602006 | orchestrator | 2025-09-27 22:11:37.602049 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-27 22:11:37.602088 | orchestrator | Saturday 27 September 2025 22:11:28 +0000 (0:00:02.788) 0:02:46.494 **** 2025-09-27 22:11:37.602097 | 
orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:37.602103 | orchestrator | 2025-09-27 22:11:37.602111 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-27 22:11:37.602122 | orchestrator | Saturday 27 September 2025 22:11:31 +0000 (0:00:02.358) 0:02:48.852 **** 2025-09-27 22:11:37.602133 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:37.602144 | orchestrator | 2025-09-27 22:11:37.602155 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-27 22:11:37.602173 | orchestrator | Saturday 27 September 2025 22:11:33 +0000 (0:00:02.705) 0:02:51.557 **** 2025-09-27 22:11:37.602184 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:37.602196 | orchestrator | 2025-09-27 22:11:37.602205 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:11:37.602214 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 22:11:37.602223 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 22:11:37.602232 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 22:11:37.602245 | orchestrator | 2025-09-27 22:11:37.602252 | orchestrator | 2025-09-27 22:11:37.602260 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:11:37.602273 | orchestrator | Saturday 27 September 2025 22:11:36 +0000 (0:00:02.746) 0:02:54.304 **** 2025-09-27 22:11:37.602281 | orchestrator | =============================================================================== 2025-09-27 22:11:37.602288 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.08s 2025-09-27 22:11:37.602295 | orchestrator | opensearch : Restart opensearch container 
------------------------------ 62.86s 2025-09-27 22:11:37.602303 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.30s 2025-09-27 22:11:37.602310 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.79s 2025-09-27 22:11:37.602318 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.75s 2025-09-27 22:11:37.602325 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.71s 2025-09-27 22:11:37.602333 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.61s 2025-09-27 22:11:37.602340 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.53s 2025-09-27 22:11:37.602347 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.36s 2025-09-27 22:11:37.602354 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.32s 2025-09-27 22:11:37.602361 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.87s 2025-09-27 22:11:37.602367 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.63s 2025-09-27 22:11:37.602374 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.12s 2025-09-27 22:11:37.602380 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.94s 2025-09-27 22:11:37.602387 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.66s 2025-09-27 22:11:37.602393 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.60s 2025-09-27 22:11:37.602400 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2025-09-27 22:11:37.602406 | orchestrator | opensearch : include_tasks 
---------------------------------------------- 0.49s 2025-09-27 22:11:37.602413 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2025-09-27 22:11:37.602419 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2025-09-27 22:11:37.602426 | orchestrator | 2025-09-27 22:11:37 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:11:37.602433 | orchestrator | 2025-09-27 22:11:37 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:11:37.602440 | orchestrator | 2025-09-27 22:11:37 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:11:40.645693 | orchestrator | 2025-09-27 22:11:40 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:11:40.647187 | orchestrator | 2025-09-27 22:11:40 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:11:40.647223 | orchestrator | 2025-09-27 22:11:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:11:43.689443 | orchestrator | 2025-09-27 22:11:43 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:11:43.690682 | orchestrator | 2025-09-27 22:11:43 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:11:43.690762 | orchestrator | 2025-09-27 22:11:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:11:46.738222 | orchestrator | 2025-09-27 22:11:46 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state STARTED 2025-09-27 22:11:46.738839 | orchestrator | 2025-09-27 22:11:46 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:11:46.739047 | orchestrator | 2025-09-27 22:11:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:11:49.782848 | orchestrator | 2025-09-27 22:11:49 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:11:49.784702 
| orchestrator | 2025-09-27 22:11:49 | INFO  | Task b6f23ada-ffb1-49d0-92dc-6c60d67c2417 is in state STARTED 2025-09-27 22:11:49.790075 | orchestrator | 2025-09-27 22:11:49 | INFO  | Task 20c6176b-f843-4e93-8995-5edb1859fcaa is in state SUCCESS 2025-09-27 22:11:49.791709 | orchestrator | 2025-09-27 22:11:49.791743 | orchestrator | 2025-09-27 22:11:49.791752 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-27 22:11:49.791759 | orchestrator | 2025-09-27 22:11:49.791766 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-27 22:11:49.791774 | orchestrator | Saturday 27 September 2025 22:08:42 +0000 (0:00:00.100) 0:00:00.100 **** 2025-09-27 22:11:49.791781 | orchestrator | ok: [localhost] => { 2025-09-27 22:11:49.791789 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-27 22:11:49.791796 | orchestrator | } 2025-09-27 22:11:49.791803 | orchestrator | 2025-09-27 22:11:49.791810 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-27 22:11:49.791817 | orchestrator | Saturday 27 September 2025 22:08:42 +0000 (0:00:00.053) 0:00:00.154 **** 2025-09-27 22:11:49.791823 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-27 22:11:49.791832 | orchestrator | ...ignoring 2025-09-27 22:11:49.791839 | orchestrator | 2025-09-27 22:11:49.791845 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-27 22:11:49.791852 | orchestrator | Saturday 27 September 2025 22:08:45 +0000 (0:00:02.830) 0:00:02.985 **** 2025-09-27 22:11:49.791858 | orchestrator | skipping: [localhost] 2025-09-27 22:11:49.791865 | orchestrator | 2025-09-27 22:11:49.791871 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-27 22:11:49.791878 | orchestrator | Saturday 27 September 2025 22:08:45 +0000 (0:00:00.055) 0:00:03.040 **** 2025-09-27 22:11:49.791884 | orchestrator | ok: [localhost] 2025-09-27 22:11:49.791891 | orchestrator | 2025-09-27 22:11:49.791897 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:11:49.791904 | orchestrator | 2025-09-27 22:11:49.791910 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:11:49.791917 | orchestrator | Saturday 27 September 2025 22:08:45 +0000 (0:00:00.157) 0:00:03.198 **** 2025-09-27 22:11:49.791924 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.791930 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:11:49.791937 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:11:49.791943 | orchestrator | 2025-09-27 22:11:49.791950 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:11:49.791956 | orchestrator | Saturday 27 September 2025 22:08:45 +0000 (0:00:00.307) 0:00:03.505 **** 2025-09-27 22:11:49.791963 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-27 22:11:49.791969 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-09-27 22:11:49.791976 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-27 22:11:49.791983 | orchestrator | 2025-09-27 22:11:49.791989 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-27 22:11:49.791996 | orchestrator | 2025-09-27 22:11:49.792002 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-27 22:11:49.792009 | orchestrator | Saturday 27 September 2025 22:08:46 +0000 (0:00:00.512) 0:00:04.018 **** 2025-09-27 22:11:49.792015 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 22:11:49.792022 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-27 22:11:49.792029 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-27 22:11:49.792080 | orchestrator | 2025-09-27 22:11:49.792088 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-27 22:11:49.792094 | orchestrator | Saturday 27 September 2025 22:08:46 +0000 (0:00:00.441) 0:00:04.460 **** 2025-09-27 22:11:49.792101 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:11:49.792108 | orchestrator | 2025-09-27 22:11:49.792115 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-27 22:11:49.792122 | orchestrator | Saturday 27 September 2025 22:08:47 +0000 (0:00:00.578) 0:00:05.039 **** 2025-09-27 22:11:49.792158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 22:11:49.792170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 22:11:49.792234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 22:11:49.792243 | orchestrator | 2025-09-27 22:11:49.792254 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-27 22:11:49.792261 | orchestrator | Saturday 27 September 2025 22:08:50 +0000 (0:00:02.802) 0:00:07.842 **** 2025-09-27 22:11:49.792268 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.792275 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.792282 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.792289 | orchestrator | 2025-09-27 22:11:49.792296 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-27 22:11:49.792303 | orchestrator | Saturday 27 September 2025 22:08:51 +0000 (0:00:00.831) 0:00:08.673 **** 2025-09-27 22:11:49.792310 | orchestrator | skipping: [testbed-node-1] 2025-09-27 
22:11:49.792317 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.792324 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.792331 | orchestrator | 2025-09-27 22:11:49.792338 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-27 22:11:49.792345 | orchestrator | Saturday 27 September 2025 22:08:52 +0000 (0:00:01.451) 0:00:10.125 **** 2025-09-27 22:11:49.792370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 22:11:49.792392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 22:11:49.792401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 
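Each container definition above carries a healthcheck of `['CMD-SHELL', '/usr/bin/clustercheck']` with `AVAILABLE_WHEN_DONOR: '1'` in the environment. A rough sketch of the decision such a Galera probe makes (map `wsrep_local_state` to an HTTP-style status; state 4 is Synced, state 2 is Donor/Desynced) is below. This is illustrative logic, not the actual clustercheck script.

```python
# Illustrative sketch of a clustercheck-style Galera health probe.
# State numbering follows Galera's wsrep_local_state variable.
WSREP_DONOR = 2   # node is acting as an SST/IST donor
WSREP_SYNCED = 4  # node is fully synced with the cluster

def probe_status(wsrep_local_state: int, available_when_donor: bool) -> int:
    """Return the HTTP status a clustercheck-style probe would report."""
    if wsrep_local_state == WSREP_SYNCED:
        return 200
    if wsrep_local_state == WSREP_DONOR and available_when_donor:
        # AVAILABLE_WHEN_DONOR=1 keeps a donor node in the load-balancer pool
        return 200
    return 503
```

With `AVAILABLE_WHEN_DONOR` unset, a donor node would be taken out of rotation (503) for the duration of a state transfer.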
22:11:49.792413 | orchestrator | 2025-09-27 22:11:49.792420 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-27 22:11:49.792427 | orchestrator | Saturday 27 September 2025 22:08:56 +0000 (0:00:04.016) 0:00:14.141 **** 2025-09-27 22:11:49.792435 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.792442 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.792449 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.792456 | orchestrator | 2025-09-27 22:11:49.792463 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-27 22:11:49.792470 | orchestrator | Saturday 27 September 2025 22:08:57 +0000 (0:00:01.014) 0:00:15.156 **** 2025-09-27 22:11:49.792477 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.792484 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:11:49.792491 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:11:49.792498 | orchestrator | 2025-09-27 22:11:49.792505 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-27 22:11:49.792513 | orchestrator | Saturday 27 September 2025 22:09:02 +0000 (0:00:04.506) 0:00:19.663 **** 2025-09-27 22:11:49.792520 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:11:49.792527 | orchestrator | 2025-09-27 22:11:49.792534 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-27 22:11:49.792541 | orchestrator | Saturday 27 September 2025 22:09:02 +0000 (0:00:00.539) 0:00:20.202 **** 2025-09-27 22:11:49.792557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:11:49.792566 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.792574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:11:49.792584 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.792601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:11:49.792609 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.792617 | orchestrator | 2025-09-27 22:11:49.792624 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-27 22:11:49.792631 | orchestrator | Saturday 27 September 2025 22:09:05 +0000 (0:00:03.312) 0:00:23.514 **** 2025-09-27 22:11:49.792639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:11:49.792651 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.792665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:11:49.792673 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.792679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:11:49.792690 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.792697 | orchestrator | 2025-09-27 22:11:49.792703 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-27 22:11:49.792709 | orchestrator | Saturday 27 September 2025 22:09:08 +0000 (0:00:02.400) 0:00:25.915 **** 2025-09-27 22:11:49.792723 | orchestrator | 
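The `custom_member_list` entries repeated throughout these item dicts follow one fixed pattern: the first node is the active backend and the remaining nodes are marked `backup`, so HAProxy keeps all Galera writes on a single node. A small sketch that reproduces those lines (the function name is ours, not kolla's):

```python
def haproxy_members(nodes, port=3306):
    """Build HAProxy 'server' lines in the style of the custom_member_list
    seen in the log: first node active, the rest 'backup'."""
    lines = []
    for i, (name, addr) in enumerate(nodes):
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if i > 0:
            line += " backup"  # only the first member takes traffic
        lines.append(line)
    return lines
```

`check port 3306 inter 2000 rise 2 fall 5` means: health-check the backend on port 3306 every 2000 ms, require 2 consecutive passes to mark it up and 5 failures to mark it down.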
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:11:49.792730 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.792741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:11:49.792753 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.792817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 22:11:49.792828 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.792834 | orchestrator | 2025-09-27 22:11:49.792840 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-27 22:11:49.792846 | orchestrator | Saturday 27 September 2025 22:09:11 +0000 
(0:00:02.870) 0:00:28.785 **** 2025-09-27 22:11:49.792981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 22:11:49.792998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 22:11:49.793015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 22:11:49.793027 | orchestrator | 2025-09-27 22:11:49.793033 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-27 22:11:49.793040 | orchestrator | Saturday 27 September 2025 22:09:14 +0000 (0:00:03.299) 0:00:32.084 **** 2025-09-27 22:11:49.793064 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.793071 | orchestrator | 
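The "Establish whether the cluster has already existed" step hinges on Galera's saved state: a node that previously joined a cluster carries a `grastate.dat` file in its data volume. The exact check kolla-ansible performs may differ; as an illustration, here is a minimal parser for that file's `# GALERA saved state` format:

```python
def parse_grastate(text: str) -> dict:
    """Parse a Galera grastate.dat file into its uuid and seqno fields.
    seqno == -1 typically means the node was not shut down cleanly."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip the '# GALERA saved state' banner
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return {"uuid": fields.get("uuid"), "seqno": int(fields.get("seqno", "-1"))}
```

On a fresh volume (as created just above), no such file exists yet, which is why the run proceeds to bootstrap a new cluster on testbed-node-0.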
changed: [testbed-node-1] 2025-09-27 22:11:49.793077 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:11:49.793083 | orchestrator | 2025-09-27 22:11:49.793089 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-27 22:11:49.793095 | orchestrator | Saturday 27 September 2025 22:09:15 +0000 (0:00:00.858) 0:00:32.943 **** 2025-09-27 22:11:49.793102 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.793108 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:11:49.793114 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:11:49.793120 | orchestrator | 2025-09-27 22:11:49.793126 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-27 22:11:49.793132 | orchestrator | Saturday 27 September 2025 22:09:15 +0000 (0:00:00.511) 0:00:33.454 **** 2025-09-27 22:11:49.793138 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.793145 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:11:49.793151 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:11:49.793157 | orchestrator | 2025-09-27 22:11:49.793163 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-27 22:11:49.793169 | orchestrator | Saturday 27 September 2025 22:09:16 +0000 (0:00:00.318) 0:00:33.773 **** 2025-09-27 22:11:49.793176 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-27 22:11:49.793183 | orchestrator | ...ignoring 2025-09-27 22:11:49.793189 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-27 22:11:49.793195 | orchestrator | ...ignoring 2025-09-27 22:11:49.793201 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-27 22:11:49.793207 | orchestrator | ...ignoring 2025-09-27 22:11:49.793213 | orchestrator | 2025-09-27 22:11:49.793219 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-27 22:11:49.793226 | orchestrator | Saturday 27 September 2025 22:09:27 +0000 (0:00:10.970) 0:00:44.743 **** 2025-09-27 22:11:49.793232 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.793238 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:11:49.793244 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:11:49.793250 | orchestrator | 2025-09-27 22:11:49.793256 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-27 22:11:49.793262 | orchestrator | Saturday 27 September 2025 22:09:27 +0000 (0:00:00.404) 0:00:45.148 **** 2025-09-27 22:11:49.793268 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.793279 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.793285 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.793291 | orchestrator | 2025-09-27 22:11:49.793297 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-27 22:11:49.793303 | orchestrator | Saturday 27 September 2025 22:09:28 +0000 (0:00:00.599) 0:00:45.747 **** 2025-09-27 22:11:49.793309 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.793315 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.793321 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.793327 | orchestrator | 2025-09-27 22:11:49.793334 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-27 22:11:49.793340 | orchestrator | Saturday 27 September 2025 22:09:28 +0000 (0:00:00.466) 0:00:46.213 **** 2025-09-27 22:11:49.793346 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 22:11:49.793352 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.793358 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.793364 | orchestrator | 2025-09-27 22:11:49.793370 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-27 22:11:49.793376 | orchestrator | Saturday 27 September 2025 22:09:29 +0000 (0:00:00.445) 0:00:46.659 **** 2025-09-27 22:11:49.793382 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.793389 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:11:49.793395 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:11:49.793401 | orchestrator | 2025-09-27 22:11:49.793410 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-27 22:11:49.793416 | orchestrator | Saturday 27 September 2025 22:09:29 +0000 (0:00:00.421) 0:00:47.081 **** 2025-09-27 22:11:49.793426 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.793432 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.793438 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.793444 | orchestrator | 2025-09-27 22:11:49.793451 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-27 22:11:49.793457 | orchestrator | Saturday 27 September 2025 22:09:30 +0000 (0:00:00.834) 0:00:47.916 **** 2025-09-27 22:11:49.793463 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.793469 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.793475 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-27 22:11:49.793481 | orchestrator | 2025-09-27 22:11:49.793488 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-27 22:11:49.793494 | orchestrator | Saturday 27 September 2025 22:09:30 +0000 (0:00:00.385) 0:00:48.301 **** 2025-09-27 
22:11:49.793500 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.793506 | orchestrator | 2025-09-27 22:11:49.793512 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-27 22:11:49.793518 | orchestrator | Saturday 27 September 2025 22:09:41 +0000 (0:00:10.583) 0:00:58.884 **** 2025-09-27 22:11:49.793524 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.793530 | orchestrator | 2025-09-27 22:11:49.793536 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-27 22:11:49.793542 | orchestrator | Saturday 27 September 2025 22:09:41 +0000 (0:00:00.134) 0:00:59.019 **** 2025-09-27 22:11:49.793549 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.793555 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.793561 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.793567 | orchestrator | 2025-09-27 22:11:49.793573 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-27 22:11:49.793579 | orchestrator | Saturday 27 September 2025 22:09:42 +0000 (0:00:00.988) 0:01:00.008 **** 2025-09-27 22:11:49.793585 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.793591 | orchestrator | 2025-09-27 22:11:49.793597 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-27 22:11:49.793603 | orchestrator | Saturday 27 September 2025 22:09:50 +0000 (0:00:07.652) 0:01:07.660 **** 2025-09-27 22:11:49.793614 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.793620 | orchestrator | 2025-09-27 22:11:49.793626 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-27 22:11:49.793632 | orchestrator | Saturday 27 September 2025 22:09:51 +0000 (0:00:01.724) 0:01:09.385 **** 2025-09-27 22:11:49.793638 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.793645 | 
orchestrator | 2025-09-27 22:11:49.793651 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-27 22:11:49.793657 | orchestrator | Saturday 27 September 2025 22:09:54 +0000 (0:00:02.552) 0:01:11.937 **** 2025-09-27 22:11:49.793663 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.793669 | orchestrator | 2025-09-27 22:11:49.793675 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-27 22:11:49.793681 | orchestrator | Saturday 27 September 2025 22:09:54 +0000 (0:00:00.136) 0:01:12.074 **** 2025-09-27 22:11:49.793687 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.793693 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.793699 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.793705 | orchestrator | 2025-09-27 22:11:49.793711 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-27 22:11:49.793717 | orchestrator | Saturday 27 September 2025 22:09:54 +0000 (0:00:00.334) 0:01:12.409 **** 2025-09-27 22:11:49.793724 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.793730 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-27 22:11:49.793736 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:11:49.793742 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:11:49.793748 | orchestrator | 2025-09-27 22:11:49.793754 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-27 22:11:49.793760 | orchestrator | skipping: no hosts matched 2025-09-27 22:11:49.793766 | orchestrator | 2025-09-27 22:11:49.793772 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-27 22:11:49.793778 | orchestrator | 2025-09-27 22:11:49.793784 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-09-27 22:11:49.793790 | orchestrator | Saturday 27 September 2025 22:09:55 +0000 (0:00:00.553) 0:01:12.963 **** 2025-09-27 22:11:49.793796 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:11:49.793802 | orchestrator | 2025-09-27 22:11:49.793808 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-27 22:11:49.793814 | orchestrator | Saturday 27 September 2025 22:10:11 +0000 (0:00:16.572) 0:01:29.535 **** 2025-09-27 22:11:49.793820 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:11:49.793826 | orchestrator | 2025-09-27 22:11:49.793833 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-27 22:11:49.793839 | orchestrator | Saturday 27 September 2025 22:10:32 +0000 (0:00:20.592) 0:01:50.128 **** 2025-09-27 22:11:49.793845 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:11:49.793851 | orchestrator | 2025-09-27 22:11:49.793857 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-27 22:11:49.793863 | orchestrator | 2025-09-27 22:11:49.793869 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-27 22:11:49.793875 | orchestrator | Saturday 27 September 2025 22:10:34 +0000 (0:00:02.188) 0:01:52.316 **** 2025-09-27 22:11:49.793881 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:11:49.793887 | orchestrator | 2025-09-27 22:11:49.793893 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-27 22:11:49.793899 | orchestrator | Saturday 27 September 2025 22:10:52 +0000 (0:00:17.490) 0:02:09.807 **** 2025-09-27 22:11:49.793905 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:11:49.793912 | orchestrator | 2025-09-27 22:11:49.793918 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-27 22:11:49.793927 
| orchestrator | Saturday 27 September 2025 22:11:12 +0000 (0:00:20.569) 0:02:30.376 **** 2025-09-27 22:11:49.793933 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:11:49.793944 | orchestrator | 2025-09-27 22:11:49.793950 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-27 22:11:49.793956 | orchestrator | 2025-09-27 22:11:49.793965 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-27 22:11:49.793972 | orchestrator | Saturday 27 September 2025 22:11:15 +0000 (0:00:02.452) 0:02:32.829 **** 2025-09-27 22:11:49.793978 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.793984 | orchestrator | 2025-09-27 22:11:49.793990 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-27 22:11:49.793996 | orchestrator | Saturday 27 September 2025 22:11:27 +0000 (0:00:12.086) 0:02:44.916 **** 2025-09-27 22:11:49.794002 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.794008 | orchestrator | 2025-09-27 22:11:49.794041 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-27 22:11:49.794078 | orchestrator | Saturday 27 September 2025 22:11:31 +0000 (0:00:04.576) 0:02:49.493 **** 2025-09-27 22:11:49.794084 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.794090 | orchestrator | 2025-09-27 22:11:49.794097 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-27 22:11:49.794103 | orchestrator | 2025-09-27 22:11:49.794109 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-27 22:11:49.794115 | orchestrator | Saturday 27 September 2025 22:11:34 +0000 (0:00:02.692) 0:02:52.185 **** 2025-09-27 22:11:49.794121 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:11:49.794127 | orchestrator | 
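The bootstrap-then-join sequence above (bootstrap container on testbed-node-0, then rolling start/restart of nodes 1 and 2 with WSREP sync waits) is driven by the Galera wsrep settings in the templated `galera.cnf` mentioned in the recap below. A minimal sketch, with the member list inferred from the node IPs in this log; the actual kolla-ansible template differs in detail:

```ini
[mysqld]
wsrep_on = ON
# Bootstrap: the first node is started with an empty member list (gcomm://),
# which creates a new cluster; that is what the "MariaDB bootstrap container"
# task does on testbed-node-0.
# Join: testbed-node-1/2 then start with the full member list and sync (SST),
# which is why their first "Wait for MariaDB service port liveness" takes ~20s.
wsrep_cluster_address = gcomm://192.168.16.10,192.168.16.11,192.168.16.12
wsrep_cluster_name = <cluster_name>   # placeholder; not shown in the log
```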
2025-09-27 22:11:49.794133 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-27 22:11:49.794139 | orchestrator | Saturday 27 September 2025 22:11:35 +0000 (0:00:00.552) 0:02:52.738 **** 2025-09-27 22:11:49.794145 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.794152 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.794158 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.794164 | orchestrator | 2025-09-27 22:11:49.794170 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-27 22:11:49.794176 | orchestrator | Saturday 27 September 2025 22:11:37 +0000 (0:00:02.494) 0:02:55.232 **** 2025-09-27 22:11:49.794182 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.794188 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.794195 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.794201 | orchestrator | 2025-09-27 22:11:49.794207 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-27 22:11:49.794213 | orchestrator | Saturday 27 September 2025 22:11:40 +0000 (0:00:02.402) 0:02:57.635 **** 2025-09-27 22:11:49.794219 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.794225 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.794231 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.794237 | orchestrator | 2025-09-27 22:11:49.794243 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-27 22:11:49.794252 | orchestrator | Saturday 27 September 2025 22:11:42 +0000 (0:00:02.360) 0:02:59.995 **** 2025-09-27 22:11:49.794262 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.794272 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.794282 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:11:49.794292 | orchestrator | 
2025-09-27 22:11:49.794302 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-27 22:11:49.794313 | orchestrator | Saturday 27 September 2025 22:11:44 +0000 (0:00:02.255) 0:03:02.251 **** 2025-09-27 22:11:49.794324 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:11:49.794333 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:11:49.794343 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:11:49.794352 | orchestrator | 2025-09-27 22:11:49.794361 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-27 22:11:49.794372 | orchestrator | Saturday 27 September 2025 22:11:47 +0000 (0:00:02.835) 0:03:05.086 **** 2025-09-27 22:11:49.794382 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:11:49.794400 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:11:49.794410 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:11:49.794419 | orchestrator | 2025-09-27 22:11:49.794429 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:11:49.794440 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-27 22:11:49.794451 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-27 22:11:49.794464 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-27 22:11:49.794471 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-27 22:11:49.794477 | orchestrator | 2025-09-27 22:11:49.794483 | orchestrator | 2025-09-27 22:11:49.794490 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:11:49.794496 | orchestrator | Saturday 27 September 2025 22:11:47 +0000 (0:00:00.429) 0:03:05.516 **** 2025-09-27 22:11:49.794502 | 
orchestrator | =============================================================================== 2025-09-27 22:11:49.794508 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.16s 2025-09-27 22:11:49.794514 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.06s 2025-09-27 22:11:49.794521 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.09s 2025-09-27 22:11:49.794527 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.97s 2025-09-27 22:11:49.794533 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.58s 2025-09-27 22:11:49.794544 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.65s 2025-09-27 22:11:49.794556 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.64s 2025-09-27 22:11:49.794562 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.58s 2025-09-27 22:11:49.794568 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.51s 2025-09-27 22:11:49.794574 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.02s 2025-09-27 22:11:49.794580 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.31s 2025-09-27 22:11:49.794587 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.30s 2025-09-27 22:11:49.794593 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.87s 2025-09-27 22:11:49.794599 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.84s 2025-09-27 22:11:49.794605 | orchestrator | Check MariaDB service --------------------------------------------------- 2.83s 2025-09-27 22:11:49.794611 | orchestrator | 
mariadb : Ensuring config directories exist ----------------------------- 2.80s 2025-09-27 22:11:49.794617 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.69s 2025-09-27 22:11:49.794623 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.55s 2025-09-27 22:11:49.794629 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.49s 2025-09-27 22:11:49.794635 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.40s 2025-09-27 22:11:49.794642 | orchestrator | 2025-09-27 22:11:49 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:11:49.794648 | orchestrator | 2025-09-27 22:11:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:11:52.848324 | orchestrator | 2025-09-27 22:11:52 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:11:52.849324 | orchestrator | 2025-09-27 22:11:52 | INFO  | Task b6f23ada-ffb1-49d0-92dc-6c60d67c2417 is in state STARTED 2025-09-27 22:11:52.851780 | orchestrator | 2025-09-27 22:11:52 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:11:52.851925 | orchestrator | 2025-09-27 22:11:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:11:55.883479 | orchestrator | 2025-09-27 22:11:55 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:11:55.884088 | orchestrator | 2025-09-27 22:11:55 | INFO  | Task b6f23ada-ffb1-49d0-92dc-6c60d67c2417 is in state STARTED 2025-09-27 22:11:55.884918 | orchestrator | 2025-09-27 22:11:55 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:11:55.884938 | orchestrator | 2025-09-27 22:11:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:11:58.923882 | orchestrator | 2025-09-27 22:11:58 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 
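The repeating "Task … is in state STARTED / Wait 1 second(s) until the next check" lines are a simple polling loop over the orchestrator's task queue. A hedged sketch of that pattern, assuming a `get_state` callable standing in for the real OSISM task-state API (which is not shown in the log):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, sleep=time.sleep):
    """Block until no task is in state STARTED; return the final states.

    get_state(task_id) -> str is an assumed stand-in for the real task API.
    """
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        if not any(s == "STARTED" for s in states.values()):
            return states
        # Mirrors the log line "Wait 1 second(s) until the next check"
        sleep(interval)
```

The loop exits as soon as every polled task reaches a terminal state (e.g. SUCCESS, as task 10ec07b6… eventually does above).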
2025-09-27 22:12:32.411186 | orchestrator | 2025-09-27 22:12:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:12:35.454377 | orchestrator | 2025-09-27 22:12:35 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:12:35.455713 | orchestrator | 2025-09-27 22:12:35 | INFO  | Task b6f23ada-ffb1-49d0-92dc-6c60d67c2417 is in state STARTED 2025-09-27 22:12:35.457477 | orchestrator | 2025-09-27 22:12:35 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state STARTED 2025-09-27 22:12:35.457547 | orchestrator | 2025-09-27 22:12:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:12:38.504318 | orchestrator | 2025-09-27 22:12:38 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:12:38.504416 | orchestrator | 2025-09-27 22:12:38 | INFO  | Task b6f23ada-ffb1-49d0-92dc-6c60d67c2417 is in state STARTED 2025-09-27 22:12:38.505046 | orchestrator | 2025-09-27 22:12:38 | INFO  | Task a683dec2-778f-4059-96d9-f9577d682d0d is in state STARTED 2025-09-27 22:12:38.506918 | orchestrator | 2025-09-27 22:12:38 | INFO  | Task 10ec07b6-1298-4fab-9f49-37eb0f12d0f3 is in state SUCCESS 2025-09-27 22:12:38.508486 | orchestrator | 2025-09-27 22:12:38.508525 | orchestrator | 2025-09-27 22:12:38.508650 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-27 22:12:38.508675 | orchestrator | 2025-09-27 22:12:38.508687 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-27 22:12:38.508699 | orchestrator | Saturday 27 September 2025 22:10:29 +0000 (0:00:00.444) 0:00:00.444 **** 2025-09-27 22:12:38.508710 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:12:38.508722 | orchestrator | 2025-09-27 22:12:38.508733 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 
2025-09-27 22:12:38.508745 | orchestrator | Saturday 27 September 2025 22:10:29 +0000 (0:00:00.449) 0:00:00.893 **** 2025-09-27 22:12:38.508755 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.508768 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:12:38.508779 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:12:38.508789 | orchestrator | 2025-09-27 22:12:38.508800 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-27 22:12:38.508811 | orchestrator | Saturday 27 September 2025 22:10:30 +0000 (0:00:00.666) 0:00:01.560 **** 2025-09-27 22:12:38.508822 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.508832 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:12:38.508843 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:12:38.508854 | orchestrator | 2025-09-27 22:12:38.508864 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-27 22:12:38.508875 | orchestrator | Saturday 27 September 2025 22:10:30 +0000 (0:00:00.240) 0:00:01.801 **** 2025-09-27 22:12:38.508885 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.508896 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:12:38.508907 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:12:38.508917 | orchestrator | 2025-09-27 22:12:38.508928 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-27 22:12:38.508939 | orchestrator | Saturday 27 September 2025 22:10:31 +0000 (0:00:00.676) 0:00:02.477 **** 2025-09-27 22:12:38.508949 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.508960 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:12:38.508970 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:12:38.508981 | orchestrator | 2025-09-27 22:12:38.508991 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-27 22:12:38.509002 | orchestrator | Saturday 27 September 2025 
22:10:31 +0000 (0:00:00.277) 0:00:02.755 **** 2025-09-27 22:12:38.509045 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.509224 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:12:38.509242 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:12:38.509254 | orchestrator | 2025-09-27 22:12:38.509266 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-27 22:12:38.509278 | orchestrator | Saturday 27 September 2025 22:10:31 +0000 (0:00:00.274) 0:00:03.029 **** 2025-09-27 22:12:38.509291 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.509303 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:12:38.509315 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:12:38.509326 | orchestrator | 2025-09-27 22:12:38.509339 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-27 22:12:38.509374 | orchestrator | Saturday 27 September 2025 22:10:32 +0000 (0:00:00.316) 0:00:03.345 **** 2025-09-27 22:12:38.509388 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.509401 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.509413 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.509426 | orchestrator | 2025-09-27 22:12:38.509437 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-27 22:12:38.509448 | orchestrator | Saturday 27 September 2025 22:10:32 +0000 (0:00:00.429) 0:00:03.774 **** 2025-09-27 22:12:38.509458 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.509469 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:12:38.509480 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:12:38.509491 | orchestrator | 2025-09-27 22:12:38.509501 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-27 22:12:38.509512 | orchestrator | Saturday 27 September 2025 22:10:32 +0000 (0:00:00.260) 0:00:04.035 **** 
2025-09-27 22:12:38.509523 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-27 22:12:38.509533 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 22:12:38.509544 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 22:12:38.509555 | orchestrator | 2025-09-27 22:12:38.509565 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-27 22:12:38.509576 | orchestrator | Saturday 27 September 2025 22:10:33 +0000 (0:00:00.655) 0:00:04.690 **** 2025-09-27 22:12:38.509586 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.509616 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:12:38.509627 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:12:38.509638 | orchestrator | 2025-09-27 22:12:38.509649 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-27 22:12:38.509660 | orchestrator | Saturday 27 September 2025 22:10:33 +0000 (0:00:00.379) 0:00:05.070 **** 2025-09-27 22:12:38.509671 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-27 22:12:38.509681 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 22:12:38.509698 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 22:12:38.509710 | orchestrator | 2025-09-27 22:12:38.509721 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-27 22:12:38.509732 | orchestrator | Saturday 27 September 2025 22:10:36 +0000 (0:00:02.055) 0:00:07.126 **** 2025-09-27 22:12:38.509742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-27 22:12:38.509754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-27 
22:12:38.509766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-27 22:12:38.509777 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.509788 | orchestrator | 2025-09-27 22:12:38.509799 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-27 22:12:38.509825 | orchestrator | Saturday 27 September 2025 22:10:36 +0000 (0:00:00.360) 0:00:07.486 **** 2025-09-27 22:12:38.509837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.509852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.509863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.509882 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.509893 | orchestrator | 2025-09-27 22:12:38.509904 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-27 22:12:38.509914 | orchestrator | Saturday 27 September 2025 22:10:37 +0000 (0:00:00.787) 0:00:08.274 **** 2025-09-27 22:12:38.509927 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.509942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.509953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.509965 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.509976 | orchestrator | 2025-09-27 22:12:38.509986 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-27 22:12:38.509997 | orchestrator | Saturday 27 September 2025 22:10:37 +0000 (0:00:00.142) 0:00:08.417 **** 2025-09-27 22:12:38.510011 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '00a1d97aa86c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-27 22:10:34.625622', 'end': '2025-09-27 22:10:34.671690', 'delta': '0:00:00.046068', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': ['00a1d97aa86c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-27 22:12:38.510169 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4d61232e3fd5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-27 22:10:35.292072', 'end': '2025-09-27 22:10:35.322057', 'delta': '0:00:00.029985', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4d61232e3fd5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-27 22:12:38.510194 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '20aef8f7dddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-27 22:10:35.856845', 'end': '2025-09-27 22:10:35.901690', 'delta': '0:00:00.044845', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['20aef8f7dddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-27 22:12:38.510214 | orchestrator | 2025-09-27 22:12:38.510224 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-27 22:12:38.510234 | orchestrator | Saturday 27 September 2025 22:10:37 +0000 
(0:00:00.368) 0:00:08.785 **** 2025-09-27 22:12:38.510244 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.510253 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:12:38.510263 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:12:38.510273 | orchestrator | 2025-09-27 22:12:38.510282 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-27 22:12:38.510292 | orchestrator | Saturday 27 September 2025 22:10:38 +0000 (0:00:00.431) 0:00:09.217 **** 2025-09-27 22:12:38.510302 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-27 22:12:38.510311 | orchestrator | 2025-09-27 22:12:38.510321 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-27 22:12:38.510331 | orchestrator | Saturday 27 September 2025 22:10:39 +0000 (0:00:01.705) 0:00:10.923 **** 2025-09-27 22:12:38.510340 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510350 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510360 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510369 | orchestrator | 2025-09-27 22:12:38.510379 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-27 22:12:38.510388 | orchestrator | Saturday 27 September 2025 22:10:40 +0000 (0:00:00.282) 0:00:11.205 **** 2025-09-27 22:12:38.510398 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510408 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510417 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510427 | orchestrator | 2025-09-27 22:12:38.510436 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-27 22:12:38.510446 | orchestrator | Saturday 27 September 2025 22:10:40 +0000 (0:00:00.437) 0:00:11.642 **** 2025-09-27 22:12:38.510456 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510465 | 
orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510475 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510484 | orchestrator | 2025-09-27 22:12:38.510494 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-27 22:12:38.510504 | orchestrator | Saturday 27 September 2025 22:10:41 +0000 (0:00:00.458) 0:00:12.101 **** 2025-09-27 22:12:38.510514 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:12:38.510523 | orchestrator | 2025-09-27 22:12:38.510533 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-27 22:12:38.510543 | orchestrator | Saturday 27 September 2025 22:10:41 +0000 (0:00:00.131) 0:00:12.233 **** 2025-09-27 22:12:38.510552 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510562 | orchestrator | 2025-09-27 22:12:38.510571 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-27 22:12:38.510581 | orchestrator | Saturday 27 September 2025 22:10:41 +0000 (0:00:00.233) 0:00:12.467 **** 2025-09-27 22:12:38.510591 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510600 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510610 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510619 | orchestrator | 2025-09-27 22:12:38.510629 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-27 22:12:38.510639 | orchestrator | Saturday 27 September 2025 22:10:41 +0000 (0:00:00.290) 0:00:12.758 **** 2025-09-27 22:12:38.510648 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510658 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510667 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510677 | orchestrator | 2025-09-27 22:12:38.510686 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-27 
22:12:38.510696 | orchestrator | Saturday 27 September 2025 22:10:42 +0000 (0:00:00.328) 0:00:13.086 **** 2025-09-27 22:12:38.510713 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510722 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510732 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510742 | orchestrator | 2025-09-27 22:12:38.510751 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-27 22:12:38.510761 | orchestrator | Saturday 27 September 2025 22:10:42 +0000 (0:00:00.492) 0:00:13.578 **** 2025-09-27 22:12:38.510771 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510780 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510790 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510799 | orchestrator | 2025-09-27 22:12:38.510809 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-27 22:12:38.510823 | orchestrator | Saturday 27 September 2025 22:10:42 +0000 (0:00:00.334) 0:00:13.912 **** 2025-09-27 22:12:38.510833 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510843 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510853 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510862 | orchestrator | 2025-09-27 22:12:38.510872 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-27 22:12:38.510881 | orchestrator | Saturday 27 September 2025 22:10:43 +0000 (0:00:00.316) 0:00:14.229 **** 2025-09-27 22:12:38.510891 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510901 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510910 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510920 | orchestrator | 2025-09-27 22:12:38.510930 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-27 
22:12:38.510953 | orchestrator | Saturday 27 September 2025 22:10:43 +0000 (0:00:00.321) 0:00:14.551 **** 2025-09-27 22:12:38.510964 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.510973 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.510983 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.510992 | orchestrator | 2025-09-27 22:12:38.511002 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-27 22:12:38.511011 | orchestrator | Saturday 27 September 2025 22:10:43 +0000 (0:00:00.489) 0:00:15.041 **** 2025-09-27 22:12:38.511039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491-osd--block--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491', 'dm-uuid-LVM-wHBmOtcwELa8Z6sw5l1XCao88lHDe41j1vjTNJfV6eA0dA3MBFIkwsYpgurmYCLZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d8c80c3--887a--53bd--bc85--16ee8bc68188-osd--block--8d8c80c3--887a--53bd--bc85--16ee8bc68188', 'dm-uuid-LVM-Rha9tU5yk0hzlXIngRcjwIXqtvE0oXBJ2HuQnvy3j6JJ85lH4xUIdHk0YgdHfOlZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511193 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491-osd--block--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7yqG3J-jVdW-2Whz-Ntob-bZFp-BAn1-lVFRGJ', 'scsi-0QEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129', 'scsi-SQEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8d8c80c3--887a--53bd--bc85--16ee8bc68188-osd--block--8d8c80c3--887a--53bd--bc85--16ee8bc68188'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YQ81UZ-2mY5-F6Ev-t0Uq-ROnw-JQoC-30TuXd', 'scsi-0QEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d', 'scsi-SQEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9', 'scsi-SQEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-18-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--be08f40e--52da--5801--960c--910a686d222b-osd--block--be08f40e--52da--5801--960c--910a686d222b', 'dm-uuid-LVM-wyBhBSAYl05TDUHUquGlYyz9dYJLjOi8A3BI4pNW5HEe1cGzhxxFEbLHgiOTasiV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2801305--6ac8--5a65--9707--7cc055d05458-osd--block--a2801305--6ac8--5a65--9707--7cc055d05458', 
'dm-uuid-LVM-2HTfx83siLmEzeaVRGpOqAiM8WDfbFLGb2wYLKlcQirWYYyx1SkVf6WXy5MRnHLp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511296 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.511310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2625e84f--b704--594b--a79a--2de5db7d7d7c-osd--block--2625e84f--b704--594b--a79a--2de5db7d7d7c', 'dm-uuid-LVM-tEJP5PbcSsSSbbDKu3GExl301ZWn60CibG2ckcvFkNhCVDl7QfWW2UexMu9MJeZA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--30a62591--9a6e--5933--8bc7--7c2bee7235f5-osd--block--30a62591--9a6e--5933--8bc7--7c2bee7235f5', 'dm-uuid-LVM-nDSvOLBW0ZRMe4W3sP2G9mky0pBp7fYUb3CoXsrUNg876FlEU3xKbreGgmq0VpHD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--be08f40e--52da--5801--960c--910a686d222b-osd--block--be08f40e--52da--5801--960c--910a686d222b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EjMDk3-q4ZY-L2iE-GwkP-CDIm-bejd-C2yAUX', 'scsi-0QEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae', 'scsi-SQEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a2801305--6ac8--5a65--9707--7cc055d05458-osd--block--a2801305--6ac8--5a65--9707--7cc055d05458'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M8B58g-tRCs-RhC0-EWdg-esdz-7oMf-9To8tD', 'scsi-0QEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901', 'scsi-SQEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 2025-09-27 22:12:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:12:38.511510 | orchestrator | 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60', 'scsi-SQEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511558 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.511568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 22:12:38.511621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part1', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part14', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part15', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part16', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511639 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2625e84f--b704--594b--a79a--2de5db7d7d7c-osd--block--2625e84f--b704--594b--a79a--2de5db7d7d7c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-im9I43-NQpD-NkwB-N1DN-gEtA-HpXU-rdcCzv', 'scsi-0QEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55', 'scsi-SQEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--30a62591--9a6e--5933--8bc7--7c2bee7235f5-osd--block--30a62591--9a6e--5933--8bc7--7c2bee7235f5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZrTht9-UIio-M6X3-0If5-OjbW-TIwq-RXDdvv', 'scsi-0QEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96', 'scsi-SQEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f', 'scsi-SQEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 22:12:38.511692 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:12:38.511702 | orchestrator | 2025-09-27 22:12:38.511711 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-27 22:12:38.511721 | orchestrator | Saturday 27 September 2025 22:10:44 +0000 (0:00:00.543) 0:00:15.585 **** 2025-09-27 22:12:38.511731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491-osd--block--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491', 'dm-uuid-LVM-wHBmOtcwELa8Z6sw5l1XCao88lHDe41j1vjTNJfV6eA0dA3MBFIkwsYpgurmYCLZ'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d8c80c3--887a--53bd--bc85--16ee8bc68188-osd--block--8d8c80c3--887a--53bd--bc85--16ee8bc68188', 'dm-uuid-LVM-Rha9tU5yk0hzlXIngRcjwIXqtvE0oXBJ2HuQnvy3j6JJ85lH4xUIdHk0YgdHfOlZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--be08f40e--52da--5801--960c--910a686d222b-osd--block--be08f40e--52da--5801--960c--910a686d222b', 'dm-uuid-LVM-wyBhBSAYl05TDUHUquGlYyz9dYJLjOi8A3BI4pNW5HEe1cGzhxxFEbLHgiOTasiV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2801305--6ac8--5a65--9707--7cc055d05458-osd--block--a2801305--6ac8--5a65--9707--7cc055d05458', 'dm-uuid-LVM-2HTfx83siLmEzeaVRGpOqAiM8WDfbFLGb2wYLKlcQirWYYyx1SkVf6WXy5MRnHLp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-09-27 22:12:38.511897 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee3e927d-3b64-40df-8c8e-1bd9928ca124-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491-osd--block--3ef55d2f--0db9--555d--b1b6--fd7fdf57b491'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7yqG3J-jVdW-2Whz-Ntob-bZFp-BAn1-lVFRGJ', 'scsi-0QEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129', 'scsi-SQEMU_QEMU_HARDDISK_d6e45664-99ef-4d09-8a38-5c0568f04129'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8d8c80c3--887a--53bd--bc85--16ee8bc68188-osd--block--8d8c80c3--887a--53bd--bc85--16ee8bc68188'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YQ81UZ-2mY5-F6Ev-t0Uq-ROnw-JQoC-30TuXd', 'scsi-0QEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d', 'scsi-SQEMU_QEMU_HARDDISK_02398e45-2b37-4a9b-beeb-c269fa72e24d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511957 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9', 'scsi-SQEMU_QEMU_HARDDISK_c7c2c329-81fb-49e1-8405-12e2c9115bb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511978 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.511995 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-18-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512027 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512045 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512055 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:12:38.512065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512075 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512085 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512109 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc8917b5-a0fa-41d5-ac0d-ebfc7a91ad43-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512127 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--be08f40e--52da--5801--960c--910a686d222b-osd--block--be08f40e--52da--5801--960c--910a686d222b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EjMDk3-q4ZY-L2iE-GwkP-CDIm-bejd-C2yAUX', 'scsi-0QEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae', 'scsi-SQEMU_QEMU_HARDDISK_f54ee983-9faf-4784-aff9-7d79079ed7ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a2801305--6ac8--5a65--9707--7cc055d05458-osd--block--a2801305--6ac8--5a65--9707--7cc055d05458'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M8B58g-tRCs-RhC0-EWdg-esdz-7oMf-9To8tD', 'scsi-0QEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901', 'scsi-SQEMU_QEMU_HARDDISK_270d9e8b-cef6-4542-9e07-9deadafed901'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512152 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60', 'scsi-SQEMU_QEMU_HARDDISK_5c98ed57-cbba-4a71-94c9-227184fafc60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512170 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512186 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:12:38.512196 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2625e84f--b704--594b--a79a--2de5db7d7d7c-osd--block--2625e84f--b704--594b--a79a--2de5db7d7d7c', 'dm-uuid-LVM-tEJP5PbcSsSSbbDKu3GExl301ZWn60CibG2ckcvFkNhCVDl7QfWW2UexMu9MJeZA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512206 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--30a62591--9a6e--5933--8bc7--7c2bee7235f5-osd--block--30a62591--9a6e--5933--8bc7--7c2bee7235f5', 'dm-uuid-LVM-nDSvOLBW0ZRMe4W3sP2G9mky0pBp7fYUb3CoXsrUNg876FlEU3xKbreGgmq0VpHD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512216 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512226 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512263 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512274 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512284 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512294 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512304 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512326 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part1', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part14', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part15', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part16', 'scsi-SQEMU_QEMU_HARDDISK_15263243-d7d0-418e-bcde-dca37b998187-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-27 22:12:38.512343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2625e84f--b704--594b--a79a--2de5db7d7d7c-osd--block--2625e84f--b704--594b--a79a--2de5db7d7d7c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-im9I43-NQpD-NkwB-N1DN-gEtA-HpXU-rdcCzv', 'scsi-0QEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55', 'scsi-SQEMU_QEMU_HARDDISK_c35b6dae-9fd6-477e-b9cb-11e140c89f55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512354 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--30a62591--9a6e--5933--8bc7--7c2bee7235f5-osd--block--30a62591--9a6e--5933--8bc7--7c2bee7235f5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZrTht9-UIio-M6X3-0If5-OjbW-TIwq-RXDdvv', 'scsi-0QEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96', 'scsi-SQEMU_QEMU_HARDDISK_347ca9a0-83dc-4ac7-930f-213626cd3e96'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512424 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f', 'scsi-SQEMU_QEMU_HARDDISK_6ce21c34-3cf8-4892-a084-795bd672264f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 22:12:38.512443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-21-17-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-27 22:12:38.512453 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:12:38.512463 | orchestrator |
2025-09-27 22:12:38.512473 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-27 22:12:38.512483 | orchestrator | Saturday 27 September 2025 22:10:45 +0000 (0:00:00.618) 0:00:16.204 ****
2025-09-27 22:12:38.512492 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:12:38.512502 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:12:38.512512 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:12:38.512521 | orchestrator |
2025-09-27 22:12:38.512530 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-27 22:12:38.512540 | orchestrator | Saturday 27 September 2025 22:10:45 +0000 (0:00:00.715) 0:00:16.919 ****
2025-09-27 22:12:38.512549 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:12:38.512559 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:12:38.512568 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:12:38.512578 | orchestrator |
2025-09-27 22:12:38.512587 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-27 22:12:38.512597 | orchestrator | Saturday 27 September 2025 22:10:46 +0000 (0:00:00.460) 0:00:17.380 ****
2025-09-27 22:12:38.512607 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:12:38.512616 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:12:38.512625 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:12:38.512635 | orchestrator |
2025-09-27 22:12:38.512644 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-27 22:12:38.512654 | orchestrator | Saturday 27 September 2025 22:10:46 +0000 (0:00:00.670) 0:00:18.050 ****
2025-09-27 22:12:38.512664 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.512673 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:12:38.512683 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:12:38.512692 | orchestrator |
2025-09-27 22:12:38.512702 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-27 22:12:38.512711 | orchestrator | Saturday 27 September 2025 22:10:47 +0000 (0:00:00.314) 0:00:18.365 ****
2025-09-27 22:12:38.512721 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.512730 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:12:38.512740 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:12:38.512749 | orchestrator |
2025-09-27 22:12:38.512758 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-27 22:12:38.512768 | orchestrator | Saturday 27 September 2025 22:10:47 +0000 (0:00:00.426) 0:00:18.791 ****
2025-09-27 22:12:38.512777 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.512787 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:12:38.512796 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:12:38.512806 | orchestrator |
2025-09-27 22:12:38.512815 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-27 22:12:38.512825 | orchestrator | Saturday 27 September 2025 22:10:48 +0000 (0:00:00.511) 0:00:19.303 ****
2025-09-27 22:12:38.512840 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-27 22:12:38.512850 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-27 22:12:38.512859 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-27 22:12:38.512869 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-27 22:12:38.512878 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-27 22:12:38.512888 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-27 22:12:38.512897 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-27 22:12:38.512906 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-27 22:12:38.512916 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-27 22:12:38.512925 | orchestrator |
2025-09-27 22:12:38.512935 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-27 22:12:38.512945 | orchestrator | Saturday 27 September 2025 22:10:49 +0000 (0:00:00.835) 0:00:20.139 ****
2025-09-27 22:12:38.512954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-27 22:12:38.512964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-27 22:12:38.512973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-27 22:12:38.512983 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.512992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-27 22:12:38.513002 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-27 22:12:38.513012 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-27 22:12:38.513037 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:12:38.513046 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-27 22:12:38.513056 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-27 22:12:38.513066 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-27 22:12:38.513086 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:12:38.513096 | orchestrator |
2025-09-27 22:12:38.513106 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-27 22:12:38.513115 | orchestrator | Saturday 27 September 2025 22:10:49 +0000 (0:00:00.337) 0:00:20.476 ****
2025-09-27 22:12:38.513125 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:12:38.513135 | orchestrator |
2025-09-27 22:12:38.513145 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-27 22:12:38.513154 | orchestrator | Saturday 27 September 2025 22:10:50 +0000 (0:00:00.700) 0:00:21.177 ****
2025-09-27 22:12:38.513164 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.513173 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:12:38.513183 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:12:38.513192 | orchestrator |
2025-09-27 22:12:38.513202 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-27 22:12:38.513211 | orchestrator | Saturday 27 September 2025 22:10:50 +0000 (0:00:00.369) 0:00:21.546 ****
2025-09-27 22:12:38.513221 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.513230 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:12:38.513240 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:12:38.513249 | orchestrator |
2025-09-27 22:12:38.513259 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-27 22:12:38.513268 | orchestrator | Saturday 27 September 2025 22:10:50 +0000 (0:00:00.325) 0:00:21.871 ****
2025-09-27 22:12:38.513278 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.513287 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:12:38.513296 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:12:38.513306 | orchestrator |
2025-09-27 22:12:38.513315 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-27 22:12:38.513325 | orchestrator | Saturday 27 September 2025 22:10:51 +0000 (0:00:00.346) 0:00:22.218 ****
2025-09-27 22:12:38.513340 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:12:38.513350 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:12:38.513360 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:12:38.513369 | orchestrator |
2025-09-27 22:12:38.513379 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-27 22:12:38.513388 | orchestrator | Saturday 27 September 2025 22:10:51 +0000 (0:00:00.681) 0:00:22.900 ****
2025-09-27 22:12:38.513398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:12:38.513408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-27 22:12:38.513417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-27 22:12:38.513427 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.513436 | orchestrator |
2025-09-27 22:12:38.513445 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-27 22:12:38.513455 | orchestrator | Saturday 27 September 2025 22:10:52 +0000 (0:00:00.383) 0:00:23.283 ****
2025-09-27 22:12:38.513464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:12:38.513474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-27 22:12:38.513484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-27 22:12:38.513493 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.513502 | orchestrator |
2025-09-27 22:12:38.513512 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-27 22:12:38.513521 | orchestrator | Saturday 27 September 2025 22:10:52 +0000 (0:00:00.371) 0:00:23.655 ****
2025-09-27 22:12:38.513531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:12:38.513541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-27 22:12:38.513550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-27 22:12:38.513559 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.513569 | orchestrator |
2025-09-27 22:12:38.513578 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-27 22:12:38.513588 | orchestrator | Saturday 27 September 2025 22:10:52 +0000 (0:00:00.370) 0:00:24.025 ****
2025-09-27 22:12:38.513597 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:12:38.513607 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:12:38.513616 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:12:38.513625 | orchestrator |
2025-09-27 22:12:38.513635 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-27 22:12:38.513644 | orchestrator | Saturday 27 September 2025 22:10:53 +0000 (0:00:00.333) 0:00:24.359 ****
2025-09-27 22:12:38.513654 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-27 22:12:38.513663 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-27 22:12:38.513672 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-27 22:12:38.513682 | orchestrator |
2025-09-27 22:12:38.513691 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-27 22:12:38.513701 | orchestrator | Saturday 27 September 2025 22:10:53 +0000 (0:00:00.493) 0:00:24.853 ****
2025-09-27 22:12:38.513710 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-27 22:12:38.513719 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-27 22:12:38.513729 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-27 22:12:38.513738 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:12:38.513748 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-27 22:12:38.513757 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-27 22:12:38.513767 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-27 22:12:38.513776 | orchestrator |
2025-09-27 22:12:38.513786 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-27 22:12:38.513812 | orchestrator | Saturday 27 September 2025 22:10:54 +0000 (0:00:01.006) 0:00:25.859 ****
2025-09-27 22:12:38.513823 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-27 22:12:38.513832 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-27 22:12:38.513842 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-27 22:12:38.513851 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-27 22:12:38.513861 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-27 22:12:38.513870 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-27 22:12:38.513880 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-27 22:12:38.513889 | orchestrator |
2025-09-27 22:12:38.513899 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-27 22:12:38.513909 | orchestrator | Saturday 27 September 2025 22:10:56 +0000 (0:00:01.974) 0:00:27.833 ****
2025-09-27 22:12:38.513918 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:12:38.513928 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:12:38.513937 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-27 22:12:38.513946 | orchestrator |
2025-09-27 22:12:38.513956 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-27 22:12:38.513965 | orchestrator | Saturday 27 September 2025 22:10:57 +0000 (0:00:00.393) 0:00:28.227 ****
2025-09-27 22:12:38.513977 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-27 22:12:38.513987 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-27 22:12:38.514007 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-27 22:12:38.514007 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-27 22:12:38.514070 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-27 22:12:38.514081 | orchestrator |
2025-09-27 22:12:38.514091 | orchestrator | TASK [generate keys] ***********************************************************
2025-09-27 22:12:38.514101 | orchestrator | Saturday 27 September 2025 22:11:43 +0000 (0:00:46.147) 0:01:14.375 ****
2025-09-27 22:12:38.514110 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514120 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514130 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514139 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514149 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514165 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514175 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-09-27 22:12:38.514184 | orchestrator |
2025-09-27 22:12:38.514194 | orchestrator | TASK [get keys from monitors] **************************************************
2025-09-27 22:12:38.514203 | orchestrator | Saturday 27 September 2025 22:12:07 +0000 (0:00:23.997) 0:01:38.373 ****
2025-09-27 22:12:38.514213 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514223 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514232 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514242 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514251 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514261 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514271 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-27 22:12:38.514280 | orchestrator |
2025-09-27 22:12:38.514300 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-09-27 22:12:38.514310 | orchestrator | Saturday 27 September 2025 22:12:19 +0000 (0:00:12.036) 0:01:50.409 ****
2025-09-27 22:12:38.514320 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514329 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-27 22:12:38.514339 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-27 22:12:38.514349 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514358 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-27 22:12:38.514368 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-27 22:12:38.514377 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514387 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-27 22:12:38.514396 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-27 22:12:38.514406 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514415 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-27 22:12:38.514425 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-27 22:12:38.514434 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514444 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-27 22:12:38.514453 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-27 22:12:38.514463 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-27 22:12:38.514472 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-27 22:12:38.514482 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-27 22:12:38.514491 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-09-27 22:12:38.514501 | orchestrator |
2025-09-27 22:12:38.514511 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:12:38.514521 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-09-27 22:12:38.514531 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-27 22:12:38.514547 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-27 22:12:38.514556 | orchestrator |
2025-09-27 22:12:38.514566 | orchestrator |
2025-09-27 22:12:38.514575 | orchestrator |
2025-09-27 22:12:38.514585 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:12:38.514594 | orchestrator | Saturday 27 September 2025 22:12:36 +0000 (0:00:17.075) 0:02:07.485 ****
2025-09-27 22:12:38.514604 | orchestrator | ===============================================================================
2025-09-27 22:12:38.514614 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.15s
2025-09-27 22:12:38.514623 | orchestrator | generate keys ---------------------------------------------------------- 24.00s
2025-09-27 22:12:38.514633 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.08s
2025-09-27 22:12:38.514642 | orchestrator | get keys from monitors ------------------------------------------------- 12.04s 2025-09-27 22:12:38.514652 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.06s 2025-09-27 22:12:38.514661 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.97s 2025-09-27 22:12:38.514671 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.71s 2025-09-27 22:12:38.514680 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.01s 2025-09-27 22:12:38.514690 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s 2025-09-27 22:12:38.514699 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.79s 2025-09-27 22:12:38.514709 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2025-09-27 22:12:38.514719 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2025-09-27 22:12:38.514728 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.68s 2025-09-27 22:12:38.514738 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.68s 2025-09-27 22:12:38.514747 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2025-09-27 22:12:38.514757 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.67s 2025-09-27 22:12:38.514766 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s 2025-09-27 22:12:38.514776 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s 2025-09-27 22:12:38.514785 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.54s 2025-09-27 
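The five pool items in the "create openstack pool(s)" task above (backups, volumes, images, metrics, vms) all share one replicated profile: pg_num/pgp_num 32, size 3, rule replicated_rule, application rbd. As a rough, hypothetical sketch of what that profile amounts to on the Ceph CLI (command strings only; the helper below is illustrative and not taken from ceph-ansible):

```python
# Render the logged pool items as equivalent `ceph osd pool create` commands.
# Pool names and parameters come from the play output above; the helper itself
# is a hypothetical illustration, not part of the playbook.
pools = ["backups", "volumes", "images", "metrics", "vms"]

def pool_commands(name, pg_num=32, size=3, application="rbd"):
    return [
        f"ceph osd pool create {name} {pg_num} {pg_num} replicated replicated_rule",
        f"ceph osd pool set {name} size {size}",
        f"ceph osd pool application enable {name} {application}",
    ]

commands = [cmd for pool in pools for cmd in pool_commands(pool)]
```

Three commands per pool, fifteen in total, matching the five `changed:` items reported by the task.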
22:12:38.514795 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.51s
2025-09-27 22:12:41.551380 | orchestrator | 2025-09-27 22:12:41 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED
2025-09-27 22:12:41.551535 | orchestrator | 2025-09-27 22:12:41 | INFO  | Task b6f23ada-ffb1-49d0-92dc-6c60d67c2417 is in state STARTED
2025-09-27 22:12:41.553291 | orchestrator | 2025-09-27 22:12:41 | INFO  | Task a683dec2-778f-4059-96d9-f9577d682d0d is in state STARTED
2025-09-27 22:12:41.553352 | orchestrator | 2025-09-27 22:12:41 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:13:15.080921 | orchestrator | 2025-09-27 22:13:15 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED
2025-09-27 22:13:15.081584 | orchestrator | 2025-09-27 22:13:15 | INFO  | Task b6f23ada-ffb1-49d0-92dc-6c60d67c2417 is in state STARTED
2025-09-27 22:13:15.083051 | orchestrator | 2025-09-27 22:13:15 | INFO  | Task a683dec2-778f-4059-96d9-f9577d682d0d is in state SUCCESS
2025-09-27 22:13:15.084309 | orchestrator | 2025-09-27 22:13:15 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED
2025-09-27 22:13:15.084340 | orchestrator | 2025-09-27 22:13:15 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:13:36.414334 | orchestrator | 2025-09-27 22:13:36 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED
2025-09-27 22:13:36.414730 | orchestrator | 2025-09-27 22:13:36 | INFO  | Task b6f23ada-ffb1-49d0-92dc-6c60d67c2417 is in state STARTED
2025-09-27 22:13:36.415567 | orchestrator | 2025-09-27 22:13:36 |
INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:13:36.415644 | orchestrator | 2025-09-27 22:13:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:13:39.448249 | orchestrator | 2025-09-27 22:13:39 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:13:39.450717 | orchestrator | 2025-09-27 22:13:39 | INFO  | Task b6f23ada-ffb1-49d0-92dc-6c60d67c2417 is in state SUCCESS 2025-09-27 22:13:39.452318 | orchestrator | 2025-09-27 22:13:39.452359 | orchestrator | 2025-09-27 22:13:39.452371 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-27 22:13:39.452383 | orchestrator | 2025-09-27 22:13:39.452394 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2025-09-27 22:13:39.452405 | orchestrator | Saturday 27 September 2025 22:12:40 +0000 (0:00:00.162) 0:00:00.162 **** 2025-09-27 22:13:39.452416 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-27 22:13:39.452428 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.452500 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.452513 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 22:13:39.452588 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.452600 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-27 22:13:39.452611 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-27 22:13:39.452621 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-27 22:13:39.452632 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-27 22:13:39.452643 | orchestrator | 2025-09-27 22:13:39.452654 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-27 22:13:39.452665 | orchestrator | Saturday 27 September 2025 22:12:45 +0000 (0:00:04.763) 0:00:04.925 **** 2025-09-27 22:13:39.452675 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-27 22:13:39.452686 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.452697 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.452732 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 22:13:39.452743 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.452754 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-27 22:13:39.452764 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-27 22:13:39.452775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-27 22:13:39.452785 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-27 22:13:39.452796 | orchestrator | 2025-09-27 22:13:39.452807 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-27 22:13:39.452818 | orchestrator | Saturday 27 September 2025 22:12:49 +0000 (0:00:04.210) 
0:00:09.136 **** 2025-09-27 22:13:39.452829 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-27 22:13:39.452840 | orchestrator | 2025-09-27 22:13:39.452851 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-27 22:13:39.452862 | orchestrator | Saturday 27 September 2025 22:12:50 +0000 (0:00:00.989) 0:00:10.125 **** 2025-09-27 22:13:39.452872 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-27 22:13:39.452883 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.452893 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.452905 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 22:13:39.452917 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.452929 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-27 22:13:39.452941 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-27 22:13:39.452967 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-27 22:13:39.453009 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-27 22:13:39.453020 | orchestrator | 2025-09-27 22:13:39.453031 | orchestrator | TASK [Check if target directories exist] *************************************** 2025-09-27 22:13:39.453042 | orchestrator | Saturday 27 September 2025 22:13:02 +0000 (0:00:12.234) 0:00:22.360 **** 2025-09-27 22:13:39.453052 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2025-09-27 22:13:39.453063 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2025-09-27 22:13:39.453074 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-09-27 22:13:39.453084 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-09-27 22:13:39.453108 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-09-27 22:13:39.453120 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-09-27 22:13:39.453130 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2025-09-27 22:13:39.453141 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2025-09-27 22:13:39.453152 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2025-09-27 22:13:39.453162 | orchestrator | 2025-09-27 22:13:39.453173 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-27 22:13:39.453183 | orchestrator | Saturday 27 September 2025 22:13:06 +0000 (0:00:03.972) 0:00:26.333 **** 2025-09-27 22:13:39.453202 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-27 22:13:39.453213 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.453224 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.453234 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 22:13:39.453245 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-27 22:13:39.453255 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 
2025-09-27 22:13:39.453266 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-27 22:13:39.453276 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-27 22:13:39.453286 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-27 22:13:39.453297 | orchestrator | 2025-09-27 22:13:39.453307 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:13:39.453318 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:13:39.453329 | orchestrator | 2025-09-27 22:13:39.453340 | orchestrator | 2025-09-27 22:13:39.453350 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:13:39.453361 | orchestrator | Saturday 27 September 2025 22:13:13 +0000 (0:00:06.439) 0:00:32.772 **** 2025-09-27 22:13:39.453371 | orchestrator | =============================================================================== 2025-09-27 22:13:39.453383 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.23s 2025-09-27 22:13:39.453393 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.44s 2025-09-27 22:13:39.453404 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.76s 2025-09-27 22:13:39.453415 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.21s 2025-09-27 22:13:39.453425 | orchestrator | Check if target directories exist --------------------------------------- 3.97s 2025-09-27 22:13:39.453436 | orchestrator | Create share directory -------------------------------------------------- 0.99s 2025-09-27 22:13:39.453446 | orchestrator | 2025-09-27 22:13:39.453457 | orchestrator | 2025-09-27 22:13:39.453467 | orchestrator | PLAY [Group hosts based on configuration] 
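The orchestrator lines interleaved with these plays poll task state ("Task … is in state STARTED", "Wait 1 second(s) until the next check") until every tracked task reports SUCCESS. A minimal sketch of that poll-until-done pattern, with a simulated task store standing in for the real OSISM task backend (all names here are hypothetical):

```python
# Simulated task states: each inner list is what successive polls observe.
# This dict stands in for the real task backend queried by the orchestrator.
observations = {
    "b70d477d": ["STARTED", "STARTED", "SUCCESS"],
    "b6f23ada": ["STARTED", "SUCCESS"],
}

def wait_for_tasks(task_states):
    """Poll every task until all report SUCCESS; return the poll log lines."""
    log = []
    iters = {tid: iter(states) for tid, states in task_states.items()}
    last = {tid: None for tid in task_states}
    pending = set(task_states)
    while pending:
        for tid in sorted(pending):
            last[tid] = next(iters[tid], "SUCCESS")
            log.append(f"Task {tid} is in state {last[tid]}")
        pending = {tid for tid in pending if last[tid] != "SUCCESS"}
        if pending:
            # The real orchestrator sleeps here before the next poll cycle.
            log.append("Wait 1 second(s) until the next check")
    return log

lines = wait_for_tasks(observations)
```

Each poll cycle reports every still-pending task, then waits; a task leaves the pending set the first time it is observed in SUCCESS, mirroring the log above.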
************************************** 2025-09-27 22:13:39.453478 | orchestrator | 2025-09-27 22:13:39.453488 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:13:39.453499 | orchestrator | Saturday 27 September 2025 22:11:52 +0000 (0:00:00.268) 0:00:00.268 **** 2025-09-27 22:13:39.453510 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.453521 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.453531 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.453542 | orchestrator | 2025-09-27 22:13:39.453552 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:13:39.453563 | orchestrator | Saturday 27 September 2025 22:11:52 +0000 (0:00:00.281) 0:00:00.549 **** 2025-09-27 22:13:39.453573 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-27 22:13:39.453584 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-27 22:13:39.453595 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-27 22:13:39.453605 | orchestrator | 2025-09-27 22:13:39.453616 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-27 22:13:39.453626 | orchestrator | 2025-09-27 22:13:39.453637 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 22:13:39.453648 | orchestrator | Saturday 27 September 2025 22:11:52 +0000 (0:00:00.396) 0:00:00.946 **** 2025-09-27 22:13:39.453664 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:13:39.453675 | orchestrator | 2025-09-27 22:13:39.453685 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-27 22:13:39.453702 | orchestrator | Saturday 27 September 2025 22:11:53 +0000 (0:00:00.474) 0:00:01.421 **** 2025-09-27 
22:13:39.453730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 22:13:39.453753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 22:13:39.453782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 22:13:39.453795 | orchestrator | 2025-09-27 22:13:39.453806 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-27 22:13:39.453817 | orchestrator | Saturday 27 September 2025 22:11:54 +0000 (0:00:01.189) 0:00:02.610 **** 2025-09-27 22:13:39.453828 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.453839 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.453850 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.453860 | orchestrator | 2025-09-27 22:13:39.453871 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 22:13:39.453881 | orchestrator | Saturday 27 September 2025 22:11:54 +0000 (0:00:00.419) 0:00:03.030 **** 2025-09-27 22:13:39.453892 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-27 22:13:39.453902 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-27 22:13:39.453913 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-27 22:13:39.453923 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-27 22:13:39.453934 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-27 22:13:39.453944 | 
orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-27 22:13:39.453955 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-27 22:13:39.453971 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-27 22:13:39.453999 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-27 22:13:39.454009 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-27 22:13:39.454073 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-27 22:13:39.454085 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-27 22:13:39.454101 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-27 22:13:39.454112 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-27 22:13:39.454123 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-27 22:13:39.454133 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-27 22:13:39.454144 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-27 22:13:39.454154 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-27 22:13:39.454165 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-27 22:13:39.454175 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-27 22:13:39.454185 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-27 22:13:39.454196 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': 
False})  2025-09-27 22:13:39.454214 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-27 22:13:39.454226 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-27 22:13:39.454237 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-27 22:13:39.454249 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-27 22:13:39.454260 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-27 22:13:39.454271 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-27 22:13:39.454282 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-27 22:13:39.454292 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-27 22:13:39.454303 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-27 22:13:39.454313 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-27 22:13:39.454324 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 
'nova', 'enabled': True}) 2025-09-27 22:13:39.454335 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-27 22:13:39.454345 | orchestrator | 2025-09-27 22:13:39.454356 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.454367 | orchestrator | Saturday 27 September 2025 22:11:55 +0000 (0:00:00.732) 0:00:03.762 **** 2025-09-27 22:13:39.454385 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.454395 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.454406 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.454417 | orchestrator | 2025-09-27 22:13:39.454427 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.454438 | orchestrator | Saturday 27 September 2025 22:11:55 +0000 (0:00:00.318) 0:00:04.081 **** 2025-09-27 22:13:39.454449 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.454460 | orchestrator | 2025-09-27 22:13:39.454471 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.454481 | orchestrator | Saturday 27 September 2025 22:11:56 +0000 (0:00:00.148) 0:00:04.229 **** 2025-09-27 22:13:39.454492 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.454502 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.454513 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.454524 | orchestrator | 2025-09-27 22:13:39.454534 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.454545 | orchestrator | Saturday 27 September 2025 22:11:56 +0000 (0:00:00.468) 0:00:04.697 **** 2025-09-27 22:13:39.454556 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.454566 | orchestrator | ok: [testbed-node-1] 2025-09-27 
22:13:39.454577 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.454587 | orchestrator | 2025-09-27 22:13:39.454598 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.454608 | orchestrator | Saturday 27 September 2025 22:11:56 +0000 (0:00:00.303) 0:00:05.001 **** 2025-09-27 22:13:39.454619 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.454630 | orchestrator | 2025-09-27 22:13:39.454640 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.454650 | orchestrator | Saturday 27 September 2025 22:11:57 +0000 (0:00:00.138) 0:00:05.140 **** 2025-09-27 22:13:39.454661 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.454672 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.454682 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.454693 | orchestrator | 2025-09-27 22:13:39.454708 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.454719 | orchestrator | Saturday 27 September 2025 22:11:57 +0000 (0:00:00.275) 0:00:05.416 **** 2025-09-27 22:13:39.454729 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.454740 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.454750 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.454761 | orchestrator | 2025-09-27 22:13:39.454772 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.454782 | orchestrator | Saturday 27 September 2025 22:11:57 +0000 (0:00:00.268) 0:00:05.684 **** 2025-09-27 22:13:39.454793 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.454803 | orchestrator | 2025-09-27 22:13:39.454814 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.454824 | orchestrator | Saturday 27 September 2025 22:11:57 
+0000 (0:00:00.122) 0:00:05.806 **** 2025-09-27 22:13:39.454835 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.454845 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.454856 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.454866 | orchestrator | 2025-09-27 22:13:39.454877 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.454892 | orchestrator | Saturday 27 September 2025 22:11:58 +0000 (0:00:00.508) 0:00:06.315 **** 2025-09-27 22:13:39.454903 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.454914 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.454925 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.454936 | orchestrator | 2025-09-27 22:13:39.454947 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.454957 | orchestrator | Saturday 27 September 2025 22:11:58 +0000 (0:00:00.292) 0:00:06.607 **** 2025-09-27 22:13:39.455024 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455038 | orchestrator | 2025-09-27 22:13:39.455049 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.455059 | orchestrator | Saturday 27 September 2025 22:11:58 +0000 (0:00:00.135) 0:00:06.743 **** 2025-09-27 22:13:39.455070 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455080 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.455091 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.455101 | orchestrator | 2025-09-27 22:13:39.455112 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.455123 | orchestrator | Saturday 27 September 2025 22:11:58 +0000 (0:00:00.299) 0:00:07.042 **** 2025-09-27 22:13:39.455133 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.455144 | orchestrator | ok: 
[testbed-node-1] 2025-09-27 22:13:39.455154 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.455165 | orchestrator | 2025-09-27 22:13:39.455176 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.455186 | orchestrator | Saturday 27 September 2025 22:11:59 +0000 (0:00:00.304) 0:00:07.347 **** 2025-09-27 22:13:39.455197 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455207 | orchestrator | 2025-09-27 22:13:39.455218 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.455229 | orchestrator | Saturday 27 September 2025 22:11:59 +0000 (0:00:00.335) 0:00:07.682 **** 2025-09-27 22:13:39.455239 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455250 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.455260 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.455271 | orchestrator | 2025-09-27 22:13:39.455281 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.455292 | orchestrator | Saturday 27 September 2025 22:11:59 +0000 (0:00:00.285) 0:00:07.968 **** 2025-09-27 22:13:39.455303 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.455313 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.455324 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.455334 | orchestrator | 2025-09-27 22:13:39.455345 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.455355 | orchestrator | Saturday 27 September 2025 22:12:00 +0000 (0:00:00.311) 0:00:08.279 **** 2025-09-27 22:13:39.455366 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455376 | orchestrator | 2025-09-27 22:13:39.455385 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.455395 | orchestrator | Saturday 
27 September 2025 22:12:00 +0000 (0:00:00.137) 0:00:08.417 **** 2025-09-27 22:13:39.455405 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455414 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.455424 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.455433 | orchestrator | 2025-09-27 22:13:39.455442 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.455452 | orchestrator | Saturday 27 September 2025 22:12:00 +0000 (0:00:00.284) 0:00:08.702 **** 2025-09-27 22:13:39.455461 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.455471 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.455480 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.455490 | orchestrator | 2025-09-27 22:13:39.455499 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.455509 | orchestrator | Saturday 27 September 2025 22:12:01 +0000 (0:00:00.487) 0:00:09.190 **** 2025-09-27 22:13:39.455527 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455545 | orchestrator | 2025-09-27 22:13:39.455562 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.455578 | orchestrator | Saturday 27 September 2025 22:12:01 +0000 (0:00:00.125) 0:00:09.315 **** 2025-09-27 22:13:39.455596 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455612 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.455629 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.455662 | orchestrator | 2025-09-27 22:13:39.455680 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.455697 | orchestrator | Saturday 27 September 2025 22:12:01 +0000 (0:00:00.269) 0:00:09.585 **** 2025-09-27 22:13:39.455714 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.455730 | 
orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.455741 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.455750 | orchestrator | 2025-09-27 22:13:39.455760 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.455769 | orchestrator | Saturday 27 September 2025 22:12:01 +0000 (0:00:00.292) 0:00:09.877 **** 2025-09-27 22:13:39.455784 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455794 | orchestrator | 2025-09-27 22:13:39.455804 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.455813 | orchestrator | Saturday 27 September 2025 22:12:01 +0000 (0:00:00.123) 0:00:10.001 **** 2025-09-27 22:13:39.455823 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455832 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.455842 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.455851 | orchestrator | 2025-09-27 22:13:39.455861 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.455870 | orchestrator | Saturday 27 September 2025 22:12:02 +0000 (0:00:00.313) 0:00:10.315 **** 2025-09-27 22:13:39.455880 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.455890 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.455899 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.455909 | orchestrator | 2025-09-27 22:13:39.455918 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.455928 | orchestrator | Saturday 27 September 2025 22:12:02 +0000 (0:00:00.531) 0:00:10.846 **** 2025-09-27 22:13:39.455938 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.455947 | orchestrator | 2025-09-27 22:13:39.455964 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.455993 | 
orchestrator | Saturday 27 September 2025 22:12:02 +0000 (0:00:00.121) 0:00:10.967 **** 2025-09-27 22:13:39.456009 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.456019 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.456029 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.456038 | orchestrator | 2025-09-27 22:13:39.456048 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 22:13:39.456058 | orchestrator | Saturday 27 September 2025 22:12:03 +0000 (0:00:00.292) 0:00:11.260 **** 2025-09-27 22:13:39.456067 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:13:39.456077 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:13:39.456086 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:13:39.456096 | orchestrator | 2025-09-27 22:13:39.456106 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 22:13:39.456115 | orchestrator | Saturday 27 September 2025 22:12:03 +0000 (0:00:00.316) 0:00:11.576 **** 2025-09-27 22:13:39.456125 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.456134 | orchestrator | 2025-09-27 22:13:39.456144 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 22:13:39.456153 | orchestrator | Saturday 27 September 2025 22:12:03 +0000 (0:00:00.120) 0:00:11.697 **** 2025-09-27 22:13:39.456163 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.456173 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.456182 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.456192 | orchestrator | 2025-09-27 22:13:39.456201 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-27 22:13:39.456211 | orchestrator | Saturday 27 September 2025 22:12:04 +0000 (0:00:00.470) 0:00:12.167 **** 2025-09-27 22:13:39.456220 | orchestrator | changed: [testbed-node-1] 
2025-09-27 22:13:39.456230 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:13:39.456240 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:13:39.456257 | orchestrator | 2025-09-27 22:13:39.456267 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-27 22:13:39.456276 | orchestrator | Saturday 27 September 2025 22:12:05 +0000 (0:00:01.606) 0:00:13.774 **** 2025-09-27 22:13:39.456286 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-27 22:13:39.456296 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-27 22:13:39.456305 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-27 22:13:39.456315 | orchestrator | 2025-09-27 22:13:39.456324 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-27 22:13:39.456334 | orchestrator | Saturday 27 September 2025 22:12:07 +0000 (0:00:01.768) 0:00:15.543 **** 2025-09-27 22:13:39.456344 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-27 22:13:39.456353 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-27 22:13:39.456363 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-27 22:13:39.456373 | orchestrator | 2025-09-27 22:13:39.456383 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-27 22:13:39.456392 | orchestrator | Saturday 27 September 2025 22:12:09 +0000 (0:00:02.056) 0:00:17.600 **** 2025-09-27 22:13:39.456402 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-27 22:13:39.456411 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-27 22:13:39.456421 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-27 22:13:39.456430 | orchestrator | 2025-09-27 22:13:39.456440 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-27 22:13:39.456449 | orchestrator | Saturday 27 September 2025 22:12:11 +0000 (0:00:01.975) 0:00:19.575 **** 2025-09-27 22:13:39.456459 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.456468 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.456478 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.456488 | orchestrator | 2025-09-27 22:13:39.456497 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-27 22:13:39.456507 | orchestrator | Saturday 27 September 2025 22:12:11 +0000 (0:00:00.313) 0:00:19.889 **** 2025-09-27 22:13:39.456517 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.456526 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.456535 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.456545 | orchestrator | 2025-09-27 22:13:39.456560 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 22:13:39.456570 | orchestrator | Saturday 27 September 2025 22:12:12 +0000 (0:00:00.294) 0:00:20.183 **** 2025-09-27 22:13:39.456579 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:13:39.456589 | orchestrator | 2025-09-27 22:13:39.456598 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-27 22:13:39.456607 | orchestrator | Saturday 27 September 2025 22:12:12 +0000 (0:00:00.565) 0:00:20.749 **** 2025-09-27 22:13:39.456629 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 22:13:39.456653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 22:13:39.456673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 22:13:39.456692 | orchestrator | 2025-09-27 22:13:39.456702 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-27 22:13:39.456712 | orchestrator | Saturday 27 September 2025 22:12:14 +0000 (0:00:01.634) 0:00:22.384 **** 2025-09-27 22:13:39.456733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 22:13:39.456750 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.456760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 22:13:39.456771 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.456793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 22:13:39.456810 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.456821 | orchestrator | 2025-09-27 22:13:39.456830 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-27 22:13:39.456840 | orchestrator | Saturday 27 September 2025 22:12:14 +0000 (0:00:00.613) 0:00:22.998 **** 2025-09-27 22:13:39.456850 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 22:13:39.456860 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:13:39.456882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 22:13:39.456898 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.456909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 22:13:39.456919 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.456928 | orchestrator | 2025-09-27 22:13:39.456938 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-27 22:13:39.456951 | orchestrator | Saturday 27 September 2025 22:12:15 +0000 (0:00:00.782) 0:00:23.780 **** 2025-09-27 22:13:39.456969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 22:13:39.457039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 22:13:39.457066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 22:13:39.457077 | orchestrator | 2025-09-27 22:13:39.457087 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 22:13:39.457097 | orchestrator | Saturday 27 September 2025 22:12:17 +0000 (0:00:01.508) 0:00:25.289 **** 2025-09-27 22:13:39.457106 | orchestrator | skipping: [testbed-node-0] 2025-09-27 
22:13:39.457116 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:13:39.457126 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:13:39.457135 | orchestrator | 2025-09-27 22:13:39.457144 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 22:13:39.457154 | orchestrator | Saturday 27 September 2025 22:12:17 +0000 (0:00:00.303) 0:00:25.592 **** 2025-09-27 22:13:39.457164 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:13:39.457173 | orchestrator | 2025-09-27 22:13:39.457182 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-27 22:13:39.457192 | orchestrator | Saturday 27 September 2025 22:12:17 +0000 (0:00:00.487) 0:00:26.080 **** 2025-09-27 22:13:39.457201 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:13:39.457211 | orchestrator | 2025-09-27 22:13:39.457220 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-27 22:13:39.457230 | orchestrator | Saturday 27 September 2025 22:12:20 +0000 (0:00:02.273) 0:00:28.354 **** 2025-09-27 22:13:39.457239 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:13:39.457249 | orchestrator | 2025-09-27 22:13:39.457258 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-27 22:13:39.457268 | orchestrator | Saturday 27 September 2025 22:12:22 +0000 (0:00:02.608) 0:00:30.962 **** 2025-09-27 22:13:39.457277 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:13:39.457286 | orchestrator | 2025-09-27 22:13:39.457296 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-27 22:13:39.457310 | orchestrator | Saturday 27 September 2025 22:12:37 +0000 (0:00:15.007) 0:00:45.969 **** 2025-09-27 22:13:39.457320 | orchestrator | 2025-09-27 22:13:39.457330 | 
orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-27 22:13:39.457339 | orchestrator | Saturday 27 September 2025 22:12:37 +0000 (0:00:00.063) 0:00:46.032 **** 2025-09-27 22:13:39.457349 | orchestrator | 2025-09-27 22:13:39.457358 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-27 22:13:39.457367 | orchestrator | Saturday 27 September 2025 22:12:37 +0000 (0:00:00.072) 0:00:46.105 **** 2025-09-27 22:13:39.457377 | orchestrator | 2025-09-27 22:13:39.457390 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-27 22:13:39.457401 | orchestrator | Saturday 27 September 2025 22:12:38 +0000 (0:00:00.065) 0:00:46.171 **** 2025-09-27 22:13:39.457410 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:13:39.457420 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:13:39.457429 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:13:39.457438 | orchestrator | 2025-09-27 22:13:39.457448 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:13:39.457457 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-27 22:13:39.457467 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-27 22:13:39.457477 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-27 22:13:39.457486 | orchestrator | 2025-09-27 22:13:39.457496 | orchestrator | 2025-09-27 22:13:39.457510 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:13:39.457520 | orchestrator | Saturday 27 September 2025 22:13:39 +0000 (0:01:00.977) 0:01:47.149 **** 2025-09-27 22:13:39.457529 | orchestrator | 
=============================================================================== 2025-09-27 22:13:39.457539 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.98s 2025-09-27 22:13:39.457547 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.01s 2025-09-27 22:13:39.457554 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.61s 2025-09-27 22:13:39.457562 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.27s 2025-09-27 22:13:39.457570 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.06s 2025-09-27 22:13:39.457577 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.98s 2025-09-27 22:13:39.457585 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.77s 2025-09-27 22:13:39.457592 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.63s 2025-09-27 22:13:39.457600 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.61s 2025-09-27 22:13:39.457608 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.51s 2025-09-27 22:13:39.457615 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.19s 2025-09-27 22:13:39.457623 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.78s 2025-09-27 22:13:39.457631 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2025-09-27 22:13:39.457639 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.61s 2025-09-27 22:13:39.457646 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2025-09-27 22:13:39.457654 | orchestrator | horizon : 
Update policy file name --------------------------------------- 0.53s 2025-09-27 22:13:39.457662 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-09-27 22:13:39.457669 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s 2025-09-27 22:13:39.457682 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-09-27 22:13:39.457690 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.47s 2025-09-27 22:13:39.457697 | orchestrator | 2025-09-27 22:13:39 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:13:39.457705 | orchestrator | 2025-09-27 22:13:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:13:42.497834 | orchestrator | 2025-09-27 22:13:42 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:13:42.499918 | orchestrator | 2025-09-27 22:13:42 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:13:42.500031 | orchestrator | 2025-09-27 22:13:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:13:45.548690 | orchestrator | 2025-09-27 22:13:45 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:13:45.551145 | orchestrator | 2025-09-27 22:13:45 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:13:45.551197 | orchestrator | 2025-09-27 22:13:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:13:48.592485 | orchestrator | 2025-09-27 22:13:48 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:13:48.594484 | orchestrator | 2025-09-27 22:13:48 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:13:48.594544 | orchestrator | 2025-09-27 22:13:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:13:51.633148 | 
orchestrator | 2025-09-27 22:13:51 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:13:51.635803 | orchestrator | 2025-09-27 22:13:51 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:13:51.635832 | orchestrator | 2025-09-27 22:13:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:13:54.672632 | orchestrator | 2025-09-27 22:13:54 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:13:54.673651 | orchestrator | 2025-09-27 22:13:54 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:13:54.673839 | orchestrator | 2025-09-27 22:13:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:13:57.714266 | orchestrator | 2025-09-27 22:13:57 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:13:57.715304 | orchestrator | 2025-09-27 22:13:57 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:13:57.715345 | orchestrator | 2025-09-27 22:13:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:00.752952 | orchestrator | 2025-09-27 22:14:00 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:00.755007 | orchestrator | 2025-09-27 22:14:00 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:14:00.755049 | orchestrator | 2025-09-27 22:14:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:03.796036 | orchestrator | 2025-09-27 22:14:03 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:03.796175 | orchestrator | 2025-09-27 22:14:03 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:14:03.796203 | orchestrator | 2025-09-27 22:14:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:06.831524 | orchestrator | 2025-09-27 22:14:06 | INFO  | Task 
b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:06.832261 | orchestrator | 2025-09-27 22:14:06 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state STARTED 2025-09-27 22:14:06.832298 | orchestrator | 2025-09-27 22:14:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:09.868317 | orchestrator | 2025-09-27 22:14:09 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:09.869222 | orchestrator | 2025-09-27 22:14:09 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:09.871175 | orchestrator | 2025-09-27 22:14:09 | INFO  | Task 9a161ccf-d386-4b0d-a426-6e3a4bd441ad is in state STARTED 2025-09-27 22:14:09.873025 | orchestrator | 2025-09-27 22:14:09 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:09.875999 | orchestrator | 2025-09-27 22:14:09 | INFO  | Task 49cd45d0-f7a7-454c-a1d4-5dc9a9a6156a is in state SUCCESS 2025-09-27 22:14:09.876116 | orchestrator | 2025-09-27 22:14:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:12.911675 | orchestrator | 2025-09-27 22:14:12 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:12.912127 | orchestrator | 2025-09-27 22:14:12 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:12.912440 | orchestrator | 2025-09-27 22:14:12 | INFO  | Task 9a161ccf-d386-4b0d-a426-6e3a4bd441ad is in state STARTED 2025-09-27 22:14:12.914080 | orchestrator | 2025-09-27 22:14:12 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:12.914133 | orchestrator | 2025-09-27 22:14:12 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:15.947728 | orchestrator | 2025-09-27 22:14:15 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:15.949545 | orchestrator | 2025-09-27 22:14:15 | INFO  | Task 
b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:15.949753 | orchestrator | 2025-09-27 22:14:15 | INFO  | Task 9a161ccf-d386-4b0d-a426-6e3a4bd441ad is in state SUCCESS 2025-09-27 22:14:15.950475 | orchestrator | 2025-09-27 22:14:15 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:15.951166 | orchestrator | 2025-09-27 22:14:15 | INFO  | Task 474f60a2-f16e-41af-8528-76df54ed700d is in state STARTED 2025-09-27 22:14:15.956675 | orchestrator | 2025-09-27 22:14:15 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:15.956754 | orchestrator | 2025-09-27 22:14:15 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:18.983871 | orchestrator | 2025-09-27 22:14:18 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:18.984281 | orchestrator | 2025-09-27 22:14:18 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:18.984817 | orchestrator | 2025-09-27 22:14:18 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:18.985420 | orchestrator | 2025-09-27 22:14:18 | INFO  | Task 474f60a2-f16e-41af-8528-76df54ed700d is in state STARTED 2025-09-27 22:14:18.986309 | orchestrator | 2025-09-27 22:14:18 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:18.986376 | orchestrator | 2025-09-27 22:14:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:22.014497 | orchestrator | 2025-09-27 22:14:22 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:22.014733 | orchestrator | 2025-09-27 22:14:22 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:22.017208 | orchestrator | 2025-09-27 22:14:22 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:22.018145 | orchestrator | 2025-09-27 22:14:22 | INFO  | Task 
474f60a2-f16e-41af-8528-76df54ed700d is in state STARTED 2025-09-27 22:14:22.018870 | orchestrator | 2025-09-27 22:14:22 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:22.019063 | orchestrator | 2025-09-27 22:14:22 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:25.049381 | orchestrator | 2025-09-27 22:14:25 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:25.050659 | orchestrator | 2025-09-27 22:14:25 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:25.051230 | orchestrator | 2025-09-27 22:14:25 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:25.052004 | orchestrator | 2025-09-27 22:14:25 | INFO  | Task 474f60a2-f16e-41af-8528-76df54ed700d is in state STARTED 2025-09-27 22:14:25.052594 | orchestrator | 2025-09-27 22:14:25 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:25.052757 | orchestrator | 2025-09-27 22:14:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:28.087365 | orchestrator | 2025-09-27 22:14:28 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:28.087454 | orchestrator | 2025-09-27 22:14:28 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state STARTED 2025-09-27 22:14:28.087723 | orchestrator | 2025-09-27 22:14:28 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:28.088816 | orchestrator | 2025-09-27 22:14:28 | INFO  | Task 474f60a2-f16e-41af-8528-76df54ed700d is in state STARTED 2025-09-27 22:14:28.089870 | orchestrator | 2025-09-27 22:14:28 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:28.089904 | orchestrator | 2025-09-27 22:14:28 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:31.125389 | orchestrator | 2025-09-27 22:14:31 | INFO  | Task 
cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED
2025-09-27 22:14:31.125758 | orchestrator | 2025-09-27 22:14:31 | INFO  | Task b70d477d-9b7a-4aea-b8d5-33f024870034 is in state SUCCESS
2025-09-27 22:14:31.127386 | orchestrator |
2025-09-27 22:14:31.127418 | orchestrator |
2025-09-27 22:14:31.127425 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-09-27 22:14:31.127434 | orchestrator |
2025-09-27 22:14:31.127438 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-09-27 22:14:31.127443 | orchestrator | Saturday 27 September 2025 22:13:17 +0000 (0:00:00.227) 0:00:00.227 ****
2025-09-27 22:14:31.127447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-09-27 22:14:31.127453 | orchestrator |
2025-09-27 22:14:31.127457 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-09-27 22:14:31.127461 | orchestrator | Saturday 27 September 2025 22:13:17 +0000 (0:00:00.237) 0:00:00.465 ****
2025-09-27 22:14:31.127465 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-09-27 22:14:31.127469 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-09-27 22:14:31.127473 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-09-27 22:14:31.127479 | orchestrator |
2025-09-27 22:14:31.127486 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-09-27 22:14:31.127492 | orchestrator | Saturday 27 September 2025 22:13:19 +0000 (0:00:01.171) 0:00:01.636 ****
2025-09-27 22:14:31.127499 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-09-27 22:14:31.127521 | orchestrator |
2025-09-27 22:14:31.127529 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-09-27 22:14:31.127532 | orchestrator | Saturday 27 September 2025 22:13:20 +0000 (0:00:01.138) 0:00:02.775 ****
2025-09-27 22:14:31.127536 | orchestrator | changed: [testbed-manager]
2025-09-27 22:14:31.127540 | orchestrator |
2025-09-27 22:14:31.127553 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-09-27 22:14:31.127559 | orchestrator | Saturday 27 September 2025 22:13:21 +0000 (0:00:00.962) 0:00:03.737 ****
2025-09-27 22:14:31.127565 | orchestrator | changed: [testbed-manager]
2025-09-27 22:14:31.127571 | orchestrator |
2025-09-27 22:14:31.127578 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-09-27 22:14:31.127584 | orchestrator | Saturday 27 September 2025 22:13:21 +0000 (0:00:00.873) 0:00:04.610 ****
2025-09-27 22:14:31.127590 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-09-27 22:14:31.127596 | orchestrator | ok: [testbed-manager]
2025-09-27 22:14:31.127602 | orchestrator |
2025-09-27 22:14:31.127609 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-27 22:14:31.127616 | orchestrator | Saturday 27 September 2025 22:13:59 +0000 (0:00:37.618) 0:00:42.229 ****
2025-09-27 22:14:31.127623 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-27 22:14:31.127630 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-27 22:14:31.127637 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-27 22:14:31.127644 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-27 22:14:31.127650 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-27 22:14:31.127657 | orchestrator |
2025-09-27 22:14:31.127663 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-27 22:14:31.127670 | orchestrator | Saturday 27 September 2025 22:14:03 +0000 (0:00:03.510) 0:00:45.740 ****
2025-09-27 22:14:31.127676 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-27 22:14:31.127683 | orchestrator |
2025-09-27 22:14:31.127689 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-27 22:14:31.127695 | orchestrator | Saturday 27 September 2025 22:14:03 +0000 (0:00:00.434) 0:00:46.174 ****
2025-09-27 22:14:31.127702 | orchestrator | skipping: [testbed-manager]
2025-09-27 22:14:31.127750 | orchestrator |
2025-09-27 22:14:31.127759 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-27 22:14:31.127772 | orchestrator | Saturday 27 September 2025 22:14:03 +0000 (0:00:00.139) 0:00:46.313 ****
2025-09-27 22:14:31.127779 | orchestrator | skipping: [testbed-manager]
2025-09-27 22:14:31.127784 | orchestrator |
2025-09-27 22:14:31.127790 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-09-27 22:14:31.127797 | orchestrator | Saturday 27 September 2025 22:14:03 +0000 (0:00:00.251) 0:00:46.565 ****
2025-09-27 22:14:31.127803 | orchestrator | changed: [testbed-manager]
2025-09-27 22:14:31.127809 | orchestrator |
2025-09-27 22:14:31.127815 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-27 22:14:31.127822 | orchestrator | Saturday 27 September 2025 22:14:05 +0000 (0:00:01.653) 0:00:48.218 ****
2025-09-27 22:14:31.127828 | orchestrator | changed: [testbed-manager]
2025-09-27 22:14:31.127834 | orchestrator |
2025-09-27 22:14:31.127841 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-09-27 22:14:31.127847 | orchestrator | Saturday 27 September 2025 22:14:06 +0000 (0:00:00.711) 0:00:48.930 ****
2025-09-27 22:14:31.127853 | orchestrator | changed: [testbed-manager]
2025-09-27 22:14:31.127858 | orchestrator |
2025-09-27 22:14:31.127865 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-09-27 22:14:31.127871 | orchestrator | Saturday 27 September 2025 22:14:06 +0000 (0:00:00.567) 0:00:49.497 ****
2025-09-27 22:14:31.127877 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-09-27 22:14:31.127884 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-09-27 22:14:31.127899 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-09-27 22:14:31.127906 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-09-27 22:14:31.127912 | orchestrator |
2025-09-27 22:14:31.127918 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:14:31.127922 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-27 22:14:31.127926 | orchestrator |
2025-09-27 22:14:31.127930 | orchestrator |
2025-09-27 22:14:31.127941 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:14:31.127969 | orchestrator | Saturday 27 September 2025 22:14:08 +0000 (0:00:01.236) 0:00:50.734 ****
2025-09-27 22:14:31.127973 | orchestrator | ===============================================================================
2025-09-27 22:14:31.127976 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.62s
2025-09-27 22:14:31.127980 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.51s
2025-09-27 22:14:31.127984 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.65s
2025-09-27 22:14:31.127988 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.24s
2025-09-27 22:14:31.127991 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.17s
2025-09-27 22:14:31.127995 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.14s
2025-09-27 22:14:31.127999 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s
2025-09-27 22:14:31.128003 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.87s
2025-09-27 22:14:31.128006 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s
2025-09-27 22:14:31.128010 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s
2025-09-27 22:14:31.128014 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s
2025-09-27 22:14:31.128019 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.25s
2025-09-27 22:14:31.128023 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2025-09-27 22:14:31.128027 |
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2025-09-27 22:14:31.128031 | orchestrator |
2025-09-27 22:14:31.128074 | orchestrator |
2025-09-27 22:14:31.128088 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:14:31.128104 | orchestrator |
2025-09-27 22:14:31.128155 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:14:31.128165 | orchestrator | Saturday 27 September 2025 22:14:11 +0000 (0:00:00.161) 0:00:00.161 ****
2025-09-27 22:14:31.128172 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:14:31.128178 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:14:31.128182 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:14:31.128189 | orchestrator |
2025-09-27 22:14:31.128195 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:14:31.128201 | orchestrator | Saturday 27 September 2025 22:14:12 +0000 (0:00:00.267) 0:00:00.429 ****
2025-09-27 22:14:31.128207 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-27 22:14:31.128220 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-27 22:14:31.128227 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-27 22:14:31.128234 | orchestrator |
2025-09-27 22:14:31.128241 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-27 22:14:31.128275 | orchestrator |
2025-09-27 22:14:31.128289 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-27 22:14:31.128296 | orchestrator | Saturday 27 September 2025 22:14:12 +0000 (0:00:00.538) 0:00:00.968 ****
2025-09-27 22:14:31.128309 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:14:31.128315 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:14:31.128322 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:14:31.128335 | orchestrator |
2025-09-27 22:14:31.128341 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:14:31.128348 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:14:31.128355 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:14:31.128361 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:14:31.128368 | orchestrator |
2025-09-27 22:14:31.128373 | orchestrator |
2025-09-27 22:14:31.128380 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:14:31.128386 | orchestrator | Saturday 27 September 2025 22:14:13 +0000 (0:00:00.656) 0:00:01.624 ****
2025-09-27 22:14:31.128392 | orchestrator | ===============================================================================
2025-09-27 22:14:31.128399 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.66s
2025-09-27 22:14:31.128405 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s
2025-09-27 22:14:31.128412 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2025-09-27 22:14:31.128418 | orchestrator |
2025-09-27 22:14:31.128424 | orchestrator |
2025-09-27 22:14:31.128430 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:14:31.128437 | orchestrator |
2025-09-27 22:14:31.128443 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:14:31.128450 | orchestrator | Saturday 27 September 2025 22:11:52 +0000 (0:00:00.263) 0:00:00.263 ****
2025-09-27 22:14:31.128456 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:14:31.128463 |
orchestrator | ok: [testbed-node-1]
2025-09-27 22:14:31.128469 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:14:31.128476 | orchestrator |
2025-09-27 22:14:31.128482 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:14:31.128488 | orchestrator | Saturday 27 September 2025 22:11:52 +0000 (0:00:00.274) 0:00:00.538 ****
2025-09-27 22:14:31.128494 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-27 22:14:31.128501 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-27 22:14:31.128507 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-27 22:14:31.128514 | orchestrator |
2025-09-27 22:14:31.128521 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-09-27 22:14:31.128527 | orchestrator |
2025-09-27 22:14:31.128540 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-27 22:14:31.128547 | orchestrator | Saturday 27 September 2025 22:11:52 +0000 (0:00:00.417) 0:00:00.955 ****
2025-09-27 22:14:31.128553 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:14:31.128559 | orchestrator |
2025-09-27 22:14:31.128566 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-09-27 22:14:31.128572 | orchestrator | Saturday 27 September 2025 22:11:53 +0000 (0:00:00.520) 0:00:01.476 ****
2025-09-27 22:14:31.128588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:14:31.128605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:14:31.128614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:14:31.128622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:14:31.128635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:14:31.128642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:14:31.128657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:14:31.128665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:14:31.128671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:14:31.128677 | orchestrator |
2025-09-27 22:14:31.128683 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-09-27 22:14:31.128689 | orchestrator | Saturday 27 September 2025 22:11:55 +0000 (0:00:01.780) 0:00:03.256 ****
2025-09-27 22:14:31.128695 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-09-27 22:14:31.128701 | orchestrator |
2025-09-27 22:14:31.128706 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-09-27 22:14:31.128712 | orchestrator | Saturday 27 September 2025 22:11:55 +0000 (0:00:00.850) 0:00:04.107 ****
2025-09-27 22:14:31.128718 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:14:31.128724 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:14:31.128730 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:14:31.128736 | orchestrator |
2025-09-27 22:14:31.128742 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-09-27 22:14:31.128748 | orchestrator | Saturday 27 September 2025 22:11:56 +0000 (0:00:00.518) 0:00:04.626 ****
2025-09-27 22:14:31.128763 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-27 22:14:31.128770 | orchestrator |
2025-09-27 22:14:31.128782 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-27 22:14:31.128788 | orchestrator | Saturday 27 September 2025 22:11:57 +0000 (0:00:00.682) 0:00:05.309 ****
2025-09-27 22:14:31.128795 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:14:31.128802 | orchestrator |
2025-09-27 22:14:31.128813 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-09-27 22:14:31.128820 | orchestrator | Saturday 27 September 2025 22:11:57 +0000 (0:00:00.554) 0:00:05.864 ****
2025-09-27 22:14:31.128828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:14:31.128842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:14:31.128850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:14:31.128858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:14:31.128922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:14:31.128960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:14:31.128973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:14:31.128981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:14:31.128988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:14:31.128995 | orchestrator |
2025-09-27 22:14:31.129002 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-09-27 22:14:31.129009 | orchestrator | Saturday 27 September 2025 22:12:00 +0000 (0:00:03.048) 0:00:08.912 ****
2025-09-27 22:14:31.129017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:14:31.129030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-27 22:14:31.129044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-27 22:14:31.129051 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:14:31.129062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-27 22:14:31.129070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 22:14:31.129084 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.129095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 22:14:31.129107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 22:14:31.129124 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.129130 | orchestrator | 2025-09-27 22:14:31.129137 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-27 22:14:31.129143 | orchestrator | Saturday 27 September 2025 22:12:01 +0000 (0:00:00.774) 0:00:09.686 **** 2025-09-27 22:14:31.129150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 22:14:31.129158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 22:14:31.129175 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.129428 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 22:14:31.129460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 22:14:31.129476 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.129483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 22:14:31.129490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 22:14:31.129515 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.129522 | orchestrator | 2025-09-27 22:14:31.129528 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-27 22:14:31.129535 | orchestrator | Saturday 27 September 2025 22:12:02 +0000 (0:00:00.724) 0:00:10.411 **** 2025-09-27 22:14:31.129546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.129554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.129562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.129578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 22:14:31.129585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 22:14:31.129592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 22:14:31.129602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.129609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.129616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.129627 | orchestrator | 2025-09-27 22:14:31.129634 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-27 22:14:31.129641 | orchestrator | Saturday 27 September 2025 22:12:05 +0000 (0:00:03.377) 0:00:13.789 **** 2025-09-27 22:14:31.129653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.129660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.129682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.129701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.129736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.129746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.129753 | orchestrator | 2025-09-27 22:14:31.129759 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-27 22:14:31.129765 | orchestrator | Saturday 27 September 2025 22:12:10 +0000 (0:00:05.137) 0:00:18.926 **** 2025-09-27 22:14:31.129772 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:14:31.129779 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:14:31.129785 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:14:31.129792 | orchestrator | 2025-09-27 22:14:31.129798 | orchestrator | TASK [keystone : 
Create Keystone domain-specific config directory] ************* 2025-09-27 22:14:31.129805 | orchestrator | Saturday 27 September 2025 22:12:12 +0000 (0:00:01.469) 0:00:20.396 **** 2025-09-27 22:14:31.129812 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.129818 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.129825 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.129836 | orchestrator | 2025-09-27 22:14:31.129843 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-27 22:14:31.129850 | orchestrator | Saturday 27 September 2025 22:12:12 +0000 (0:00:00.514) 0:00:20.911 **** 2025-09-27 22:14:31.129856 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.129863 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.129870 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.129877 | orchestrator | 2025-09-27 22:14:31.129883 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-27 22:14:31.129890 | orchestrator | Saturday 27 September 2025 22:12:13 +0000 (0:00:00.311) 0:00:21.222 **** 2025-09-27 22:14:31.129896 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.129903 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.129910 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.129916 | orchestrator | 2025-09-27 22:14:31.129923 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-27 22:14:31.129929 | orchestrator | Saturday 27 September 2025 22:12:13 +0000 (0:00:00.524) 0:00:21.747 **** 2025-09-27 22:14:31.129936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.129964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.129984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.129996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.130003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 22:14:31.130053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.130064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.130075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.130082 | orchestrator | 2025-09-27 22:14:31.130094 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 22:14:31.130101 | orchestrator | Saturday 27 September 2025 22:12:15 +0000 (0:00:02.287) 0:00:24.035 **** 2025-09-27 22:14:31.130108 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.130115 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.130122 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.130129 | orchestrator | 2025-09-27 22:14:31.130136 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-27 22:14:31.130143 | orchestrator | Saturday 27 September 2025 22:12:16 +0000 (0:00:00.338) 0:00:24.374 **** 2025-09-27 22:14:31.130150 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-27 22:14:31.130157 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-27 22:14:31.130164 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-27 22:14:31.130170 | orchestrator | 2025-09-27 
22:14:31.130177 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-27 22:14:31.130183 | orchestrator | Saturday 27 September 2025 22:12:17 +0000 (0:00:01.561) 0:00:25.936 **** 2025-09-27 22:14:31.130190 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 22:14:31.130196 | orchestrator | 2025-09-27 22:14:31.130203 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-27 22:14:31.130210 | orchestrator | Saturday 27 September 2025 22:12:18 +0000 (0:00:00.887) 0:00:26.823 **** 2025-09-27 22:14:31.130216 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.130222 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.130228 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.130234 | orchestrator | 2025-09-27 22:14:31.130240 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-27 22:14:31.130246 | orchestrator | Saturday 27 September 2025 22:12:19 +0000 (0:00:00.780) 0:00:27.604 **** 2025-09-27 22:14:31.130254 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 22:14:31.130261 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-27 22:14:31.130267 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-27 22:14:31.130273 | orchestrator | 2025-09-27 22:14:31.130279 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-27 22:14:31.130283 | orchestrator | Saturday 27 September 2025 22:12:20 +0000 (0:00:01.098) 0:00:28.702 **** 2025-09-27 22:14:31.130287 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:14:31.130291 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:14:31.130297 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:14:31.130303 | orchestrator | 2025-09-27 22:14:31.130309 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-27 
22:14:31.130316 | orchestrator | Saturday 27 September 2025 22:12:20 +0000 (0:00:00.306) 0:00:29.009 **** 2025-09-27 22:14:31.130322 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-27 22:14:31.130328 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-27 22:14:31.130335 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-27 22:14:31.130342 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-27 22:14:31.130348 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-27 22:14:31.130360 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-27 22:14:31.130366 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-27 22:14:31.130373 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-27 22:14:31.130379 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-27 22:14:31.130391 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-27 22:14:31.130397 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-27 22:14:31.130404 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-27 22:14:31.130411 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-27 22:14:31.130417 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 
'fernet-healthcheck.sh'}) 2025-09-27 22:14:31.130424 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-27 22:14:31.130431 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 22:14:31.130438 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 22:14:31.130445 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 22:14:31.130454 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 22:14:31.130461 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 22:14:31.130468 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 22:14:31.130475 | orchestrator | 2025-09-27 22:14:31.130481 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-27 22:14:31.130487 | orchestrator | Saturday 27 September 2025 22:12:29 +0000 (0:00:09.032) 0:00:38.041 **** 2025-09-27 22:14:31.130493 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 22:14:31.130500 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 22:14:31.130505 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 22:14:31.130511 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 22:14:31.130517 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 22:14:31.130523 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 22:14:31.130528 | orchestrator | 
2025-09-27 22:14:31.130535 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-27 22:14:31.130541 | orchestrator | Saturday 27 September 2025 22:12:32 +0000 (0:00:02.620) 0:00:40.662 **** 2025-09-27 22:14:31.130547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.130560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.130577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 22:14:31.130586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 22:14:31.130593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 22:14:31.130600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 22:14:31.130607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.130624 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.130631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 22:14:31.130638 | orchestrator | 2025-09-27 22:14:31.130644 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 22:14:31.130650 | orchestrator | Saturday 27 September 2025 22:12:34 +0000 (0:00:02.189) 0:00:42.852 **** 2025-09-27 22:14:31.130657 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.130663 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.130670 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.130676 | orchestrator | 2025-09-27 22:14:31.130685 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-27 22:14:31.130692 | orchestrator | Saturday 27 September 
2025 22:12:35 +0000 (0:00:00.289) 0:00:43.141 **** 2025-09-27 22:14:31.130698 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:14:31.130704 | orchestrator | 2025-09-27 22:14:31.130710 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-27 22:14:31.130717 | orchestrator | Saturday 27 September 2025 22:12:37 +0000 (0:00:02.351) 0:00:45.493 **** 2025-09-27 22:14:31.130723 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:14:31.130729 | orchestrator | 2025-09-27 22:14:31.130735 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-27 22:14:31.130741 | orchestrator | Saturday 27 September 2025 22:12:39 +0000 (0:00:02.379) 0:00:47.872 **** 2025-09-27 22:14:31.130747 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:14:31.130753 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:14:31.130758 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:14:31.130764 | orchestrator | 2025-09-27 22:14:31.130770 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-27 22:14:31.130776 | orchestrator | Saturday 27 September 2025 22:12:40 +0000 (0:00:01.036) 0:00:48.908 **** 2025-09-27 22:14:31.130782 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:14:31.130788 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:14:31.130795 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:14:31.130801 | orchestrator | 2025-09-27 22:14:31.130807 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-27 22:14:31.130814 | orchestrator | Saturday 27 September 2025 22:12:41 +0000 (0:00:00.488) 0:00:49.397 **** 2025-09-27 22:14:31.130820 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.130826 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.130832 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.130838 | orchestrator | 2025-09-27 
22:14:31.130844 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-27 22:14:31.130855 | orchestrator | Saturday 27 September 2025 22:12:41 +0000 (0:00:00.411) 0:00:49.808 **** 2025-09-27 22:14:31.130861 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:14:31.130868 | orchestrator | 2025-09-27 22:14:31.130874 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-27 22:14:31.130880 | orchestrator | Saturday 27 September 2025 22:12:56 +0000 (0:00:15.218) 0:01:05.027 **** 2025-09-27 22:14:31.130886 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:14:31.130893 | orchestrator | 2025-09-27 22:14:31.130899 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-27 22:14:31.130904 | orchestrator | Saturday 27 September 2025 22:13:07 +0000 (0:00:10.349) 0:01:15.376 **** 2025-09-27 22:14:31.130908 | orchestrator | 2025-09-27 22:14:31.130912 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-27 22:14:31.130916 | orchestrator | Saturday 27 September 2025 22:13:07 +0000 (0:00:00.073) 0:01:15.450 **** 2025-09-27 22:14:31.130919 | orchestrator | 2025-09-27 22:14:31.130923 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-27 22:14:31.130927 | orchestrator | Saturday 27 September 2025 22:13:07 +0000 (0:00:00.065) 0:01:15.515 **** 2025-09-27 22:14:31.130930 | orchestrator | 2025-09-27 22:14:31.130934 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-27 22:14:31.130938 | orchestrator | Saturday 27 September 2025 22:13:07 +0000 (0:00:00.068) 0:01:15.584 **** 2025-09-27 22:14:31.130941 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:14:31.130986 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:14:31.130990 | orchestrator | changed: 
[testbed-node-2] 2025-09-27 22:14:31.130994 | orchestrator | 2025-09-27 22:14:31.130998 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-27 22:14:31.131002 | orchestrator | Saturday 27 September 2025 22:13:30 +0000 (0:00:23.136) 0:01:38.720 **** 2025-09-27 22:14:31.131006 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:14:31.131009 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:14:31.131013 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:14:31.131017 | orchestrator | 2025-09-27 22:14:31.131021 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-27 22:14:31.131025 | orchestrator | Saturday 27 September 2025 22:13:35 +0000 (0:00:04.681) 0:01:43.402 **** 2025-09-27 22:14:31.131028 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:14:31.131032 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:14:31.131040 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:14:31.131044 | orchestrator | 2025-09-27 22:14:31.131047 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 22:14:31.131051 | orchestrator | Saturday 27 September 2025 22:13:42 +0000 (0:00:07.664) 0:01:51.067 **** 2025-09-27 22:14:31.131055 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:14:31.131059 | orchestrator | 2025-09-27 22:14:31.131062 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-27 22:14:31.131066 | orchestrator | Saturday 27 September 2025 22:13:43 +0000 (0:00:00.702) 0:01:51.770 **** 2025-09-27 22:14:31.131070 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:14:31.131074 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:14:31.131077 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:14:31.131081 | orchestrator | 2025-09-27 
22:14:31.131085 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-27 22:14:31.131089 | orchestrator | Saturday 27 September 2025 22:13:44 +0000 (0:00:00.722) 0:01:52.492 **** 2025-09-27 22:14:31.131092 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:14:31.131096 | orchestrator | 2025-09-27 22:14:31.131100 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-27 22:14:31.131103 | orchestrator | Saturday 27 September 2025 22:13:46 +0000 (0:00:01.691) 0:01:54.184 **** 2025-09-27 22:14:31.131107 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-27 22:14:31.131117 | orchestrator | 2025-09-27 22:14:31.131120 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-27 22:14:31.131124 | orchestrator | Saturday 27 September 2025 22:13:56 +0000 (0:00:09.971) 0:02:04.155 **** 2025-09-27 22:14:31.131128 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-27 22:14:31.131132 | orchestrator | 2025-09-27 22:14:31.131138 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-27 22:14:31.131142 | orchestrator | Saturday 27 September 2025 22:14:19 +0000 (0:00:23.444) 0:02:27.600 **** 2025-09-27 22:14:31.131146 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-27 22:14:31.131150 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-27 22:14:31.131153 | orchestrator | 2025-09-27 22:14:31.131157 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-27 22:14:31.131161 | orchestrator | Saturday 27 September 2025 22:14:25 +0000 (0:00:06.161) 0:02:33.761 **** 2025-09-27 22:14:31.131164 | orchestrator | skipping: [testbed-node-0] 2025-09-27 
22:14:31.131168 | orchestrator | 2025-09-27 22:14:31.131172 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-27 22:14:31.131176 | orchestrator | Saturday 27 September 2025 22:14:25 +0000 (0:00:00.112) 0:02:33.873 **** 2025-09-27 22:14:31.131179 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.131183 | orchestrator | 2025-09-27 22:14:31.131187 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-27 22:14:31.131190 | orchestrator | Saturday 27 September 2025 22:14:25 +0000 (0:00:00.109) 0:02:33.982 **** 2025-09-27 22:14:31.131194 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.131198 | orchestrator | 2025-09-27 22:14:31.131202 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-27 22:14:31.131205 | orchestrator | Saturday 27 September 2025 22:14:26 +0000 (0:00:00.179) 0:02:34.162 **** 2025-09-27 22:14:31.131209 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.131213 | orchestrator | 2025-09-27 22:14:31.131216 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-27 22:14:31.131220 | orchestrator | Saturday 27 September 2025 22:14:26 +0000 (0:00:00.532) 0:02:34.695 **** 2025-09-27 22:14:31.131224 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:14:31.131227 | orchestrator | 2025-09-27 22:14:31.131231 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 22:14:31.131235 | orchestrator | Saturday 27 September 2025 22:14:29 +0000 (0:00:02.914) 0:02:37.610 **** 2025-09-27 22:14:31.131239 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:14:31.131242 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:14:31.131246 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:14:31.131250 | orchestrator | 2025-09-27 22:14:31.131254 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-09-27 22:14:31.131258 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-27 22:14:31.131262 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-27 22:14:31.131266 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-27 22:14:31.131270 | orchestrator | 2025-09-27 22:14:31.131273 | orchestrator | 2025-09-27 22:14:31.131277 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:14:31.131281 | orchestrator | Saturday 27 September 2025 22:14:29 +0000 (0:00:00.454) 0:02:38.064 **** 2025-09-27 22:14:31.131285 | orchestrator | =============================================================================== 2025-09-27 22:14:31.131288 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.44s 2025-09-27 22:14:31.131294 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 23.14s 2025-09-27 22:14:31.131298 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.22s 2025-09-27 22:14:31.131302 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.35s 2025-09-27 22:14:31.131306 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.97s 2025-09-27 22:14:31.131311 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.03s 2025-09-27 22:14:31.131315 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.67s 2025-09-27 22:14:31.131319 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.16s 2025-09-27 22:14:31.131323 | orchestrator | keystone : Copying over 
keystone.conf ----------------------------------- 5.14s 2025-09-27 22:14:31.131326 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.68s 2025-09-27 22:14:31.131330 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.38s 2025-09-27 22:14:31.131334 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.05s 2025-09-27 22:14:31.131337 | orchestrator | keystone : Creating default user role ----------------------------------- 2.91s 2025-09-27 22:14:31.131341 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.62s 2025-09-27 22:14:31.131345 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.38s 2025-09-27 22:14:31.131349 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.35s 2025-09-27 22:14:31.131352 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.29s 2025-09-27 22:14:31.131356 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.19s 2025-09-27 22:14:31.131360 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.78s 2025-09-27 22:14:31.131363 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.69s 2025-09-27 22:14:31.131369 | orchestrator | 2025-09-27 22:14:31 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:31.131373 | orchestrator | 2025-09-27 22:14:31 | INFO  | Task 474f60a2-f16e-41af-8528-76df54ed700d is in state STARTED 2025-09-27 22:14:31.131377 | orchestrator | 2025-09-27 22:14:31 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:31.131380 | orchestrator | 2025-09-27 22:14:31 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:34.171426 | orchestrator | 2025-09-27 
22:14:34 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:14:34.172151 | orchestrator | 2025-09-27 22:14:34 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:34.172569 | orchestrator | 2025-09-27 22:14:34 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:34.174394 | orchestrator | 2025-09-27 22:14:34 | INFO  | Task 474f60a2-f16e-41af-8528-76df54ed700d is in state STARTED 2025-09-27 22:14:34.175005 | orchestrator | 2025-09-27 22:14:34 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:34.175023 | orchestrator | 2025-09-27 22:14:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:49.365061 | orchestrator | 2025-09-27 22:14:49 | INFO  | Task 
ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:14:49.365118 | orchestrator | 2025-09-27 22:14:49 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:49.365135 | orchestrator | 2025-09-27 22:14:49 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:49.365140 | orchestrator | 2025-09-27 22:14:49 | INFO  | Task 474f60a2-f16e-41af-8528-76df54ed700d is in state STARTED 2025-09-27 22:14:49.365145 | orchestrator | 2025-09-27 22:14:49 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:49.365150 | orchestrator | 2025-09-27 22:14:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:52.353723 | orchestrator | 2025-09-27 22:14:52 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:14:52.353925 | orchestrator | 2025-09-27 22:14:52 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:52.356314 | orchestrator | 2025-09-27 22:14:52 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:52.356969 | orchestrator | 2025-09-27 22:14:52 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:14:52.357529 | orchestrator | 2025-09-27 22:14:52 | INFO  | Task 474f60a2-f16e-41af-8528-76df54ed700d is in state SUCCESS 2025-09-27 22:14:52.358284 | orchestrator | 2025-09-27 22:14:52 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:52.358308 | orchestrator | 2025-09-27 22:14:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:55.392231 | orchestrator | 2025-09-27 22:14:55 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:14:55.392327 | orchestrator | 2025-09-27 22:14:55 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:55.392900 | orchestrator | 2025-09-27 22:14:55 | INFO  | Task 
72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:55.393332 | orchestrator | 2025-09-27 22:14:55 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:14:55.393805 | orchestrator | 2025-09-27 22:14:55 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:55.394135 | orchestrator | 2025-09-27 22:14:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:14:58.415841 | orchestrator | 2025-09-27 22:14:58 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:14:58.416384 | orchestrator | 2025-09-27 22:14:58 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:14:58.416899 | orchestrator | 2025-09-27 22:14:58 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:14:58.417783 | orchestrator | 2025-09-27 22:14:58 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:14:58.418172 | orchestrator | 2025-09-27 22:14:58 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:14:58.418208 | orchestrator | 2025-09-27 22:14:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:15:01.451980 | orchestrator | 2025-09-27 22:15:01 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:15:01.452052 | orchestrator | 2025-09-27 22:15:01 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:15:01.452062 | orchestrator | 2025-09-27 22:15:01 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:15:01.452068 | orchestrator | 2025-09-27 22:15:01 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:15:01.452075 | orchestrator | 2025-09-27 22:15:01 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:15:01.452081 | orchestrator | 2025-09-27 22:15:01 | INFO  | Wait 1 
second(s) until the next check 2025-09-27 22:15:04.461884 | orchestrator | 2025-09-27 22:15:04 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:15:04.461994 | orchestrator | 2025-09-27 22:15:04 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:15:04.462359 | orchestrator | 2025-09-27 22:15:04 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:15:04.463336 | orchestrator | 2025-09-27 22:15:04 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:15:04.463877 | orchestrator | 2025-09-27 22:15:04 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:15:04.463982 | orchestrator | 2025-09-27 22:15:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:15:07.491437 | orchestrator | 2025-09-27 22:15:07 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:15:07.491608 | orchestrator | 2025-09-27 22:15:07 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:15:07.492237 | orchestrator | 2025-09-27 22:15:07 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:15:07.492720 | orchestrator | 2025-09-27 22:15:07 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:15:07.493154 | orchestrator | 2025-09-27 22:15:07 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:15:07.493234 | orchestrator | 2025-09-27 22:15:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:15:10.516446 | orchestrator | 2025-09-27 22:15:10 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:15:10.516677 | orchestrator | 2025-09-27 22:15:10 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:15:10.517434 | orchestrator | 2025-09-27 22:15:10 | INFO  | Task 
72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:15:10.518290 | orchestrator | 2025-09-27 22:15:10 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:15:10.519075 | orchestrator | 2025-09-27 22:15:10 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:15:10.519135 | orchestrator | 2025-09-27 22:15:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:15:13.543504 | orchestrator | 2025-09-27 22:15:13 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:15:13.544581 | orchestrator | 2025-09-27 22:15:13 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:15:13.545140 | orchestrator | 2025-09-27 22:15:13 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:15:13.545621 | orchestrator | 2025-09-27 22:15:13 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:15:13.546284 | orchestrator | 2025-09-27 22:15:13 | INFO  | Task 05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state STARTED 2025-09-27 22:15:13.546341 | orchestrator | 2025-09-27 22:15:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:15:16.576249 | orchestrator | 2025-09-27 22:15:16 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:15:16.577127 | orchestrator | 2025-09-27 22:15:16 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED 2025-09-27 22:15:16.578538 | orchestrator | 2025-09-27 22:15:16 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED 2025-09-27 22:15:16.580032 | orchestrator | 2025-09-27 22:15:16 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:15:16.581372 | orchestrator | 2025-09-27 22:15:16 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED 2025-09-27 22:15:16.583003 | orchestrator | 2025-09-27 22:15:16 | INFO  | Task 
05acd74c-fb80-48f5-8a73-07bb3b5278ae is in state SUCCESS 2025-09-27 22:15:16.584380 | orchestrator | 2025-09-27 22:15:16.584402 | orchestrator | 2025-09-27 22:15:16.584407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:15:16.584413 | orchestrator | 2025-09-27 22:15:16.584418 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:15:16.584424 | orchestrator | Saturday 27 September 2025 22:14:18 +0000 (0:00:00.262) 0:00:00.262 **** 2025-09-27 22:15:16.584428 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:15:16.584454 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:15:16.584480 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:15:16.584486 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:15:16.584493 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:15:16.584499 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:15:16.584505 | orchestrator | ok: [testbed-manager] 2025-09-27 22:15:16.584512 | orchestrator | 2025-09-27 22:15:16.584518 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:15:16.584525 | orchestrator | Saturday 27 September 2025 22:14:18 +0000 (0:00:00.772) 0:00:01.034 **** 2025-09-27 22:15:16.584533 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-27 22:15:16.584539 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-27 22:15:16.584543 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-27 22:15:16.584548 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-27 22:15:16.584553 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-27 22:15:16.584557 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-27 22:15:16.584580 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-27 22:15:16.584585 | 
orchestrator | 2025-09-27 22:15:16.584589 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-27 22:15:16.584593 | orchestrator | 2025-09-27 22:15:16.584597 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-27 22:15:16.584601 | orchestrator | Saturday 27 September 2025 22:14:19 +0000 (0:00:00.732) 0:00:01.767 **** 2025-09-27 22:15:16.584617 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2025-09-27 22:15:16.584623 | orchestrator | 2025-09-27 22:15:16.584627 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-27 22:15:16.584630 | orchestrator | Saturday 27 September 2025 22:14:21 +0000 (0:00:01.981) 0:00:03.748 **** 2025-09-27 22:15:16.584634 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store)) 2025-09-27 22:15:16.584638 | orchestrator | 2025-09-27 22:15:16.584642 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-27 22:15:16.584646 | orchestrator | Saturday 27 September 2025 22:14:25 +0000 (0:00:03.539) 0:00:07.288 **** 2025-09-27 22:15:16.584650 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-27 22:15:16.584656 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-27 22:15:16.584660 | orchestrator | 2025-09-27 22:15:16.584664 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-27 22:15:16.584668 | orchestrator | Saturday 27 September 2025 22:14:31 +0000 (0:00:06.267) 0:00:13.555 **** 2025-09-27 22:15:16.584671 | orchestrator | ok: [testbed-node-3] => 
(item=service) 2025-09-27 22:15:16.584676 | orchestrator | 2025-09-27 22:15:16.584680 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-27 22:15:16.584683 | orchestrator | Saturday 27 September 2025 22:14:34 +0000 (0:00:03.077) 0:00:16.633 **** 2025-09-27 22:15:16.584687 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 22:15:16.584691 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service) 2025-09-27 22:15:16.584695 | orchestrator | 2025-09-27 22:15:16.584698 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-27 22:15:16.584702 | orchestrator | Saturday 27 September 2025 22:14:38 +0000 (0:00:03.818) 0:00:20.451 **** 2025-09-27 22:15:16.584706 | orchestrator | ok: [testbed-node-3] => (item=admin) 2025-09-27 22:15:16.584710 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin) 2025-09-27 22:15:16.584714 | orchestrator | 2025-09-27 22:15:16.584717 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-27 22:15:16.584727 | orchestrator | Saturday 27 September 2025 22:14:44 +0000 (0:00:06.282) 0:00:26.734 **** 2025-09-27 22:15:16.584731 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin) 2025-09-27 22:15:16.584734 | orchestrator | 2025-09-27 22:15:16.584738 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:15:16.584742 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:15:16.584746 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:15:16.584750 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:15:16.584753 | orchestrator | testbed-node-2 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:15:16.584757 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:15:16.584770 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:15:16.584774 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:15:16.584778 | orchestrator | 2025-09-27 22:15:16.584782 | orchestrator | 2025-09-27 22:15:16.584785 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:15:16.584789 | orchestrator | Saturday 27 September 2025 22:14:49 +0000 (0:00:04.465) 0:00:31.199 **** 2025-09-27 22:15:16.584793 | orchestrator | =============================================================================== 2025-09-27 22:15:16.584797 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.28s 2025-09-27 22:15:16.584800 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.27s 2025-09-27 22:15:16.584804 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.47s 2025-09-27 22:15:16.584808 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.82s 2025-09-27 22:15:16.584811 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.54s 2025-09-27 22:15:16.584815 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.08s 2025-09-27 22:15:16.584819 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.98s 2025-09-27 22:15:16.584823 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s 2025-09-27 22:15:16.584826 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.73s 2025-09-27 22:15:16.584830 | orchestrator | 2025-09-27 22:15:16.584834 | orchestrator | 2025-09-27 22:15:16.584837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:15:16.584841 | orchestrator | 2025-09-27 22:15:16.584845 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:15:16.584851 | orchestrator | Saturday 27 September 2025 22:14:18 +0000 (0:00:00.259) 0:00:00.259 **** 2025-09-27 22:15:16.584855 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:15:16.584859 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:15:16.584863 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:15:16.584866 | orchestrator | 2025-09-27 22:15:16.584870 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:15:16.584874 | orchestrator | Saturday 27 September 2025 22:14:18 +0000 (0:00:00.320) 0:00:00.580 **** 2025-09-27 22:15:16.584878 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-27 22:15:16.584882 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-27 22:15:16.584885 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-27 22:15:16.584889 | orchestrator | 2025-09-27 22:15:16.584896 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-27 22:15:16.584900 | orchestrator | 2025-09-27 22:15:16.584904 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-27 22:15:16.584907 | orchestrator | Saturday 27 September 2025 22:14:18 +0000 (0:00:00.364) 0:00:00.944 **** 2025-09-27 22:15:16.584911 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:15:16.584915 | orchestrator | 2025-09-27 22:15:16.584947 | orchestrator | TASK 
[service-ks-register : glance | Creating services] ************************ 2025-09-27 22:15:16.584951 | orchestrator | Saturday 27 September 2025 22:14:19 +0000 (0:00:00.633) 0:00:01.578 **** 2025-09-27 22:15:16.584955 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-27 22:15:16.584958 | orchestrator | 2025-09-27 22:15:16.584962 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-27 22:15:16.584966 | orchestrator | Saturday 27 September 2025 22:14:22 +0000 (0:00:03.292) 0:00:04.871 **** 2025-09-27 22:15:16.584970 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-27 22:15:16.584974 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-27 22:15:16.584977 | orchestrator | 2025-09-27 22:15:16.584981 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-27 22:15:16.584985 | orchestrator | Saturday 27 September 2025 22:14:29 +0000 (0:00:06.728) 0:00:11.600 **** 2025-09-27 22:15:16.584989 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-27 22:15:16.584992 | orchestrator | 2025-09-27 22:15:16.584996 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-27 22:15:16.585000 | orchestrator | Saturday 27 September 2025 22:14:32 +0000 (0:00:03.062) 0:00:14.662 **** 2025-09-27 22:15:16.585004 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 22:15:16.585007 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-27 22:15:16.585011 | orchestrator | 2025-09-27 22:15:16.585015 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-27 22:15:16.585019 | orchestrator | Saturday 27 September 2025 22:14:36 +0000 (0:00:04.219) 0:00:18.882 **** 
2025-09-27 22:15:16.585023 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 22:15:16.585026 | orchestrator | 2025-09-27 22:15:16.585030 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-27 22:15:16.585034 | orchestrator | Saturday 27 September 2025 22:14:39 +0000 (0:00:02.962) 0:00:21.844 **** 2025-09-27 22:15:16.585038 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-27 22:15:16.585041 | orchestrator | 2025-09-27 22:15:16.585045 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-27 22:15:16.585049 | orchestrator | Saturday 27 September 2025 22:14:44 +0000 (0:00:04.509) 0:00:26.354 **** 2025-09-27 22:15:16.585065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 22:15:16.585076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 22:15:16.585085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-09-27 22:15:16.585093 | orchestrator | 2025-09-27 22:15:16.585096 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-27 22:15:16.585100 | orchestrator | Saturday 27 September 2025 22:14:47 +0000 (0:00:03.170) 0:00:29.524 **** 2025-09-27 22:15:16.585104 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:15:16.585108 | orchestrator | 2025-09-27 22:15:16.585112 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-27 22:15:16.585115 | orchestrator | Saturday 27 September 2025 22:14:47 +0000 (0:00:00.579) 0:00:30.103 **** 2025-09-27 22:15:16.585119 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:15:16.585123 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:15:16.585129 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:15:16.585133 | orchestrator | 2025-09-27 22:15:16.585137 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-27 22:15:16.585140 | orchestrator | Saturday 27 September 2025 22:14:51 +0000 (0:00:03.609) 0:00:33.712 **** 2025-09-27 22:15:16.585144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 22:15:16.585148 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 22:15:16.585152 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 22:15:16.585156 | orchestrator | 2025-09-27 22:15:16.585160 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-27 22:15:16.585163 | orchestrator | Saturday 27 September 2025 22:14:52 +0000 (0:00:01.372) 0:00:35.085 **** 2025-09-27 22:15:16.585167 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 22:15:16.585171 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 22:15:16.585174 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 22:15:16.585178 | orchestrator | 2025-09-27 22:15:16.585182 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-27 22:15:16.585186 | orchestrator | Saturday 27 September 2025 22:14:53 +0000 (0:00:01.007) 0:00:36.092 **** 2025-09-27 22:15:16.585189 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:15:16.585193 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:15:16.585197 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:15:16.585200 | orchestrator | 2025-09-27 22:15:16.585204 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-27 22:15:16.585208 | orchestrator | Saturday 27 September 2025 22:14:54 +0000 (0:00:00.633) 0:00:36.725 **** 2025-09-27 22:15:16.585212 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:15:16.585215 | orchestrator | 2025-09-27 22:15:16.585219 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-27 22:15:16.585223 | orchestrator | Saturday 27 September 2025 22:14:54 +0000 (0:00:00.217) 0:00:36.943 **** 2025-09-27 22:15:16.585226 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:15:16.585230 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:15:16.585234 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:15:16.585237 | orchestrator | 2025-09-27 22:15:16.585241 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-27 22:15:16.585245 | orchestrator | Saturday 27 September 2025 22:14:55 +0000 (0:00:00.275) 
0:00:37.219 **** 2025-09-27 22:15:16.585249 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:15:16.585252 | orchestrator | 2025-09-27 22:15:16.585256 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-27 22:15:16.585260 | orchestrator | Saturday 27 September 2025 22:14:55 +0000 (0:00:00.517) 0:00:37.736 **** 2025-09-27 22:15:16.585271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 22:15:16.585279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 22:15:16.585287 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 22:15:16.585294 | orchestrator | 2025-09-27 22:15:16.585298 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-27 22:15:16.585302 | orchestrator | Saturday 27 September 2025 22:14:59 +0000 (0:00:03.872) 0:00:41.609 **** 2025-09-27 
22:15:16.585309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 22:15:16.585313 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:15:16.585320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 22:15:16.585328 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:15:16.585338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 22:15:16.585343 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:15:16.585346 | orchestrator | 2025-09-27 22:15:16.585350 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-27 22:15:16.585354 | orchestrator | Saturday 27 September 2025 22:15:02 +0000 (0:00:03.429) 0:00:45.039 **** 2025-09-27 22:15:16.585358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 22:15:16.585365 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:15:16.585374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 22:15:16.585382 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:15:16.585386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 22:15:16.585393 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:15:16.585397 | orchestrator | 2025-09-27 22:15:16.585401 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-27 22:15:16.585404 | orchestrator | Saturday 27 September 2025 22:15:05 +0000 (0:00:03.099) 0:00:48.139 **** 2025-09-27 22:15:16.585408 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:15:16.585412 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:15:16.585416 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:15:16.585419 | orchestrator | 2025-09-27 22:15:16.585423 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-27 22:15:16.585427 | orchestrator | Saturday 27 September 2025 22:15:10 +0000 
(0:00:04.164) 0:00:52.303 **** 2025-09-27 22:15:16.585431 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: 'glance_backend_swift' is undefined 2025-09-27 22:15:16.585537 | orchestrator | failed: [testbed-node-1] (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"key": "glance-api", "value": {"container_name": "glance_api", "dimensions": {}, "enabled": true, "environment": {"http_proxy": "", "https_proxy": "", "no_proxy": "localhost,127.0.0.1,192.168.16.11,192.168.16.9"}, "group": "glance-api", "haproxy": {"glance_api": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": false, "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}, "glance_api_external": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": true, "external_fqdn": "api.testbed.osism.xyz", "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}}, "healthcheck": {"interval": "30", "retries": "3", "start_period": "5", "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.11:9292"], "timeout": "30"}, "host_in_groups": true, "image": "registry.osism.tech/kolla/glance-api:2024.2", "privileged": true, "volumes": ["/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "glance:/var/lib/glance/", "", "kolla_logs:/var/log/kolla/", "", "iscsi_info:/etc/iscsi", "/dev:/dev"]}}, "msg": "AnsibleUndefinedVariable: 'glance_backend_swift' is undefined"} 2025-09-27 22:15:16.585555 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: ansible.errors.AnsibleUndefinedVariable: 'glance_backend_swift' is undefined 2025-09-27 22:15:16.585571 | orchestrator | failed: [testbed-node-0] (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "glance-api", "value": {"container_name": "glance_api", "dimensions": {}, "enabled": true, "environment": 
{"http_proxy": "", "https_proxy": "", "no_proxy": "localhost,127.0.0.1,192.168.16.10,192.168.16.9"}, "group": "glance-api", "haproxy": {"glance_api": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": false, "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}, "glance_api_external": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": true, "external_fqdn": "api.testbed.osism.xyz", "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}}, "healthcheck": {"interval": "30", "retries": "3", "start_period": "5", "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"], "timeout": "30"}, "host_in_groups": true, "image": "registry.osism.tech/kolla/glance-api:2024.2", "privileged": true, "volumes": ["/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "glance:/var/lib/glance/", "", "kolla_logs:/var/log/kolla/", "", "iscsi_info:/etc/iscsi", "/dev:/dev"]}}, "msg": "AnsibleUndefinedVariable: 'glance_backend_swift' is undefined"} 2025-09-27 22:15:16.585579 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: ansible.errors.AnsibleUndefinedVariable: 'glance_backend_swift' is undefined 2025-09-27 22:15:16.585590 | orchestrator | failed: [testbed-node-2] (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "glance-api", "value": {"container_name": "glance_api", "dimensions": {}, "enabled": true, "environment": 
{"http_proxy": "", "https_proxy": "", "no_proxy": "localhost,127.0.0.1,192.168.16.12,192.168.16.9"}, "group": "glance-api", "haproxy": {"glance_api": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": false, "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}, "glance_api_external": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": true, "external_fqdn": "api.testbed.osism.xyz", "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}}, "healthcheck": {"interval": "30", "retries": "3", "start_period": "5", "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.12:9292"], "timeout": "30"}, "host_in_groups": true, "image": "registry.osism.tech/kolla/glance-api:2024.2", "privileged": true, "volumes": ["/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "glance:/var/lib/glance/", "", "kolla_logs:/var/log/kolla/", "", "iscsi_info:/etc/iscsi", "/dev:/dev"]}}, "msg": "AnsibleUndefinedVariable: 'glance_backend_swift' is undefined"} 2025-09-27 22:15:16.585595 | orchestrator | 2025-09-27 22:15:16.585598 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:15:16.585602 | orchestrator | testbed-node-0 : ok=17  changed=10  unreachable=0 failed=1  skipped=5  rescued=0 ignored=0 2025-09-27 22:15:16.585613 | orchestrator | testbed-node-1 : ok=11  
changed=5  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2025-09-27 22:15:16.585617 | orchestrator | testbed-node-2 : ok=11  changed=5  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2025-09-27 22:15:16.585621 | orchestrator | 2025-09-27 22:15:16.585624 | orchestrator | 2025-09-27 22:15:16.585628 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:15:16.585632 | orchestrator | Saturday 27 September 2025 22:15:14 +0000 (0:00:04.127) 0:00:56.430 **** 2025-09-27 22:15:16.585635 | orchestrator | =============================================================================== 2025-09-27 22:15:16.585639 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.73s 2025-09-27 22:15:16.585643 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.51s 2025-09-27 22:15:16.585647 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.22s 2025-09-27 22:15:16.585650 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.16s 2025-09-27 22:15:16.585654 | orchestrator | glance : Copying over config.json files for services -------------------- 4.13s 2025-09-27 22:15:16.585658 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.87s 2025-09-27 22:15:16.585661 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.61s 2025-09-27 22:15:16.585665 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.43s 2025-09-27 22:15:16.585669 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.29s 2025-09-27 22:15:16.585673 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.17s 2025-09-27 22:15:16.585676 | orchestrator | service-cert-copy : glance | Copying over backend 
internal TLS key ------ 3.10s
2025-09-27 22:15:16.585682 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.06s
2025-09-27 22:15:16.585686 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 2.96s
2025-09-27 22:15:16.585690 | orchestrator | glance : Copy over multiple ceph configs for Glance --------------------- 1.37s
2025-09-27 22:15:16.585693 | orchestrator | glance : Copy over ceph Glance keyrings --------------------------------- 1.01s
2025-09-27 22:15:16.585697 | orchestrator | glance : include_tasks -------------------------------------------------- 0.63s
2025-09-27 22:15:16.585701 | orchestrator | glance : Ensuring config directory has correct owner and permission ----- 0.63s
2025-09-27 22:15:16.585704 | orchestrator | glance : include_tasks -------------------------------------------------- 0.58s
2025-09-27 22:15:16.585708 | orchestrator | glance : include_tasks -------------------------------------------------- 0.52s
2025-09-27 22:15:16.585712 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s
2025-09-27 22:15:16.585715 | orchestrator | 2025-09-27 22:15:16 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:19.617994 | orchestrator | 2025-09-27 22:15:19 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:19.618790 | orchestrator | 2025-09-27 22:15:19 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED
2025-09-27 22:15:19.619580 | orchestrator | 2025-09-27 22:15:19 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:19.621073 | orchestrator | 2025-09-27 22:15:19 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:19.621906 | orchestrator | 2025-09-27 22:15:19 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:19.621987 | orchestrator | 2025-09-27 22:15:19 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:22.646912 | orchestrator | 2025-09-27 22:15:22 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:22.647076 | orchestrator | 2025-09-27 22:15:22 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED
2025-09-27 22:15:22.647399 | orchestrator | 2025-09-27 22:15:22 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:22.648890 | orchestrator | 2025-09-27 22:15:22 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:22.649488 | orchestrator | 2025-09-27 22:15:22 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:22.649512 | orchestrator | 2025-09-27 22:15:22 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:25.675050 | orchestrator | 2025-09-27 22:15:25 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:25.677256 | orchestrator | 2025-09-27 22:15:25 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED
2025-09-27 22:15:25.678137 | orchestrator | 2025-09-27 22:15:25 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:25.678950 | orchestrator | 2025-09-27 22:15:25 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:25.680036 | orchestrator | 2025-09-27 22:15:25 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:25.680092 | orchestrator | 2025-09-27 22:15:25 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:28.735815 | orchestrator | 2025-09-27 22:15:28 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:28.736520 | orchestrator | 2025-09-27 22:15:28 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED
2025-09-27 22:15:28.738123 | orchestrator | 2025-09-27 22:15:28 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:28.739254 | orchestrator | 2025-09-27 22:15:28 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:28.741209 | orchestrator | 2025-09-27 22:15:28 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:28.741262 | orchestrator | 2025-09-27 22:15:28 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:31.782268 | orchestrator | 2025-09-27 22:15:31 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:31.782446 | orchestrator | 2025-09-27 22:15:31 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state STARTED
2025-09-27 22:15:31.783253 | orchestrator | 2025-09-27 22:15:31 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:31.784019 | orchestrator | 2025-09-27 22:15:31 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:31.785012 | orchestrator | 2025-09-27 22:15:31 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:31.785102 | orchestrator | 2025-09-27 22:15:31 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:34.816168 | orchestrator | 2025-09-27 22:15:34 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:34.816325 | orchestrator | 2025-09-27 22:15:34 | INFO  | Task cb231486-aed6-4aec-a490-bcdbbb33a315 is in state SUCCESS
2025-09-27 22:15:34.816442 | orchestrator | 2025-09-27 22:15:34 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:34.816991 | orchestrator | 2025-09-27 22:15:34 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:34.817512 | orchestrator | 2025-09-27 22:15:34 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:34.817644 | orchestrator | 2025-09-27 22:15:34 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:37.845221 | orchestrator | 2025-09-27 22:15:37 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:37.845821 | orchestrator | 2025-09-27 22:15:37 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:37.846805 | orchestrator | 2025-09-27 22:15:37 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:37.847371 | orchestrator | 2025-09-27 22:15:37 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:37.847452 | orchestrator | 2025-09-27 22:15:37 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:40.878370 | orchestrator | 2025-09-27 22:15:40 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:40.878456 | orchestrator | 2025-09-27 22:15:40 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:40.879105 | orchestrator | 2025-09-27 22:15:40 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:40.879594 | orchestrator | 2025-09-27 22:15:40 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:40.879632 | orchestrator | 2025-09-27 22:15:40 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:43.921176 | orchestrator | 2025-09-27 22:15:43 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:43.921263 | orchestrator | 2025-09-27 22:15:43 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:43.921284 | orchestrator | 2025-09-27 22:15:43 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:43.921301 | orchestrator | 2025-09-27 22:15:43 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:43.921312 | orchestrator | 2025-09-27 22:15:43 | INFO  | Wait 1 second(s) until the next check
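The deploy tooling above is polling its background task IDs roughly once per second until each one leaves the STARTED state. A minimal shell sketch of such a wait loop follows; `task_state` is a hypothetical stand-in for the real status query (here it just reads a state file so the sketch is self-contained), not the tooling's actual API.

```shell
#!/bin/sh
# Hypothetical stand-in for querying a task's state; the real tooling
# asks its task queue instead of reading a file.
task_state() { cat "/tmp/state-$1"; }

# Block until none of the given task IDs is in state STARTED,
# re-checking once per second as in the log above.
wait_for_tasks() {
  for id in "$@"; do
    while [ "$(task_state "$id")" = "STARTED" ]; do
      echo "Task $id is in state STARTED"
      sleep 1
    done
    echo "Task $id is in state $(task_state "$id")"
  done
}
```

A task whose state file already reads SUCCESS passes through immediately; one stuck in STARTED keeps the loop (and the log spam) going until the state changes.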
2025-09-27 22:15:46.954148 | orchestrator | 2025-09-27 22:15:46 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:46.955377 | orchestrator | 2025-09-27 22:15:46 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:46.958636 | orchestrator | 2025-09-27 22:15:46 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:46.959753 | orchestrator | 2025-09-27 22:15:46 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:46.960169 | orchestrator | 2025-09-27 22:15:46 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:49.991954 | orchestrator | 2025-09-27 22:15:49 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:49.994291 | orchestrator | 2025-09-27 22:15:49 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:49.995769 | orchestrator | 2025-09-27 22:15:49 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:49.997244 | orchestrator | 2025-09-27 22:15:49 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:49.997283 | orchestrator | 2025-09-27 22:15:49 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:53.025065 | orchestrator | 2025-09-27 22:15:53 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:53.025193 | orchestrator | 2025-09-27 22:15:53 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:53.026161 | orchestrator | 2025-09-27 22:15:53 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:53.026830 | orchestrator | 2025-09-27 22:15:53 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:53.026860 | orchestrator | 2025-09-27 22:15:53 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:56.069211 | orchestrator | 2025-09-27 22:15:56 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:56.071195 | orchestrator | 2025-09-27 22:15:56 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:56.072972 | orchestrator | 2025-09-27 22:15:56 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:56.074104 | orchestrator | 2025-09-27 22:15:56 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:56.074136 | orchestrator | 2025-09-27 22:15:56 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:15:59.112456 | orchestrator | 2025-09-27 22:15:59 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:15:59.113599 | orchestrator | 2025-09-27 22:15:59 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:15:59.114203 | orchestrator | 2025-09-27 22:15:59 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:15:59.115066 | orchestrator | 2025-09-27 22:15:59 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:15:59.115097 | orchestrator | 2025-09-27 22:15:59 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:16:02.155166 | orchestrator | 2025-09-27 22:16:02 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:16:02.156332 | orchestrator | 2025-09-27 22:16:02 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:16:02.158339 | orchestrator | 2025-09-27 22:16:02 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:16:02.160517 | orchestrator | 2025-09-27 22:16:02 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:16:02.160586 | orchestrator | 2025-09-27 22:16:02 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:16:05.278702 | orchestrator | 2025-09-27 22:16:05 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:16:05.281374 | orchestrator | 2025-09-27 22:16:05 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:16:05.284459 | orchestrator | 2025-09-27 22:16:05 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:16:05.289228 | orchestrator | 2025-09-27 22:16:05 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:16:05.289295 | orchestrator | 2025-09-27 22:16:05 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:16:08.309221 | orchestrator | 2025-09-27 22:16:08 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:16:08.309437 | orchestrator | 2025-09-27 22:16:08 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:16:08.309793 | orchestrator | 2025-09-27 22:16:08 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:16:08.310406 | orchestrator | 2025-09-27 22:16:08 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:16:08.310432 | orchestrator | 2025-09-27 22:16:08 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:16:11.334703 | orchestrator | 2025-09-27 22:16:11 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:16:11.334847 | orchestrator | 2025-09-27 22:16:11 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:16:11.335200 | orchestrator | 2025-09-27 22:16:11 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:16:11.335550 | orchestrator | 2025-09-27 22:16:11 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:16:11.335566 | orchestrator | 2025-09-27 22:16:11 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:16:14.369431 | orchestrator | 2025-09-27 22:16:14 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:16:14.369555 | orchestrator | 2025-09-27 22:16:14 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:16:14.369577 | orchestrator | 2025-09-27 22:16:14 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:16:14.369592 | orchestrator | 2025-09-27 22:16:14 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:16:14.369607 | orchestrator | 2025-09-27 22:16:14 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:16:17.391934 | orchestrator | 2025-09-27 22:16:17 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:16:17.393209 | orchestrator | 2025-09-27 22:16:17 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:16:17.394459 | orchestrator | 2025-09-27 22:16:17 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:16:17.395633 | orchestrator | 2025-09-27 22:16:17 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:16:17.395684 | orchestrator | 2025-09-27 22:16:17 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:16:20.424236 | orchestrator | 2025-09-27 22:16:20 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:16:20.424590 | orchestrator | 2025-09-27 22:16:20 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:16:20.425306 | orchestrator | 2025-09-27 22:16:20 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:16:20.426209 | orchestrator | 2025-09-27 22:16:20 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:16:20.426244 | orchestrator | 2025-09-27 22:16:20 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:16:23.460290 | orchestrator | 2025-09-27 22:16:23 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:16:23.460381 | orchestrator | 2025-09-27 22:16:23 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state STARTED
2025-09-27 22:16:23.460458 | orchestrator | 2025-09-27 22:16:23 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:16:23.461405 | orchestrator | 2025-09-27 22:16:23 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:16:23.461445 | orchestrator | 2025-09-27 22:16:23 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:16:26.495262 | orchestrator | 2025-09-27 22:16:26 | INFO  | Task f8df45b2-4aeb-487f-9a19-de3606fdc58b is in state STARTED
2025-09-27 22:16:26.501099 | orchestrator | 2025-09-27 22:16:26 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED
2025-09-27 22:16:26.505777 | orchestrator | 2025-09-27 22:16:26 | INFO  | Task 72c256d1-b081-4282-980f-128e6a3af968 is in state SUCCESS
2025-09-27 22:16:26.507403 | orchestrator |
2025-09-27 22:16:26.507462 | orchestrator |
2025-09-27 22:16:26.507475 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-09-27 22:16:26.507486 | orchestrator |
2025-09-27 22:16:26.507495 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-09-27 22:16:26.507505 | orchestrator | Saturday 27 September 2025 22:14:12 +0000 (0:00:00.243) 0:00:00.243 ****
2025-09-27 22:16:26.507514 | orchestrator | changed: [testbed-manager]
2025-09-27 22:16:26.507524 | orchestrator |
2025-09-27 22:16:26.507533 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-09-27 22:16:26.507543 | orchestrator | Saturday 27 September 2025 22:14:13 +0000 (0:00:01.781) 0:00:02.024 ****
2025-09-27 22:16:26.507552 | orchestrator | changed: [testbed-manager]
2025-09-27 22:16:26.507562 | orchestrator |
2025-09-27 22:16:26.507568 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-09-27 22:16:26.507574 | orchestrator | Saturday 27 September 2025 22:14:14 +0000 (0:00:00.815) 0:00:02.840 ****
2025-09-27 22:16:26.507579 | orchestrator | changed: [testbed-manager]
2025-09-27 22:16:26.507586 | orchestrator |
2025-09-27 22:16:26.507591 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-09-27 22:16:26.507597 | orchestrator | Saturday 27 September 2025 22:14:15 +0000 (0:00:00.926) 0:00:03.767 ****
2025-09-27 22:16:26.507602 | orchestrator | changed: [testbed-manager]
2025-09-27 22:16:26.507608 | orchestrator |
2025-09-27 22:16:26.507613 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-09-27 22:16:26.507618 | orchestrator | Saturday 27 September 2025 22:14:16 +0000 (0:00:01.195) 0:00:04.962 ****
2025-09-27 22:16:26.507624 | orchestrator | changed: [testbed-manager]
2025-09-27 22:16:26.507629 | orchestrator |
2025-09-27 22:16:26.507634 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-09-27 22:16:26.507640 | orchestrator | Saturday 27 September 2025 22:14:17 +0000 (0:00:00.933) 0:00:05.896 ****
2025-09-27 22:16:26.507645 | orchestrator | changed: [testbed-manager]
2025-09-27 22:16:26.507650 | orchestrator |
2025-09-27 22:16:26.507655 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-09-27 22:16:26.507661 | orchestrator | Saturday 27 September 2025 22:14:18 +0000 (0:00:00.977) 0:00:06.873 ****
2025-09-27 22:16:26.507666 | orchestrator | changed: [testbed-manager]
2025-09-27 22:16:26.507671 | orchestrator |
2025-09-27 22:16:26.507677 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-09-27 22:16:26.507682 | orchestrator | Saturday 27 September 2025 22:14:19 +0000 (0:00:01.093) 0:00:07.967 ****
2025-09-27 22:16:26.507687 | orchestrator | changed: [testbed-manager]
2025-09-27 22:16:26.507693 | orchestrator |
2025-09-27 22:16:26.507698 | orchestrator | TASK [Create admin user] *******************************************************
2025-09-27 22:16:26.507704 | orchestrator | Saturday 27 September 2025 22:14:20 +0000 (0:00:01.057) 0:00:09.025 ****
2025-09-27 22:16:26.507709 | orchestrator | changed: [testbed-manager]
2025-09-27 22:16:26.507714 | orchestrator |
2025-09-27 22:16:26.507720 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-09-27 22:16:26.507725 | orchestrator | Saturday 27 September 2025 22:15:08 +0000 (0:00:47.599) 0:00:56.624 ****
2025-09-27 22:16:26.507730 | orchestrator | skipping: [testbed-manager]
2025-09-27 22:16:26.507736 | orchestrator |
2025-09-27 22:16:26.507741 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-27 22:16:26.507746 | orchestrator |
2025-09-27 22:16:26.507752 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-27 22:16:26.507757 | orchestrator | Saturday 27 September 2025 22:15:08 +0000 (0:00:00.134) 0:00:56.759 ****
2025-09-27 22:16:26.507762 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:16:26.507768 | orchestrator |
2025-09-27 22:16:26.507773 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-27 22:16:26.507778 | orchestrator |
2025-09-27 22:16:26.507784 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-27 22:16:26.507873 | orchestrator | Saturday 27 September 2025 22:15:09 +0000 (0:00:01.402) 0:00:58.162 ****
2025-09-27 22:16:26.507984 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:16:26.507993 | orchestrator |
2025-09-27 22:16:26.507999 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-27 22:16:26.508005 | orchestrator |
2025-09-27 22:16:26.508011 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-27 22:16:26.508034 | orchestrator | Saturday 27 September 2025 22:15:21 +0000 (0:00:11.327) 0:01:09.489 ****
2025-09-27 22:16:26.508040 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:16:26.508046 | orchestrator |
2025-09-27 22:16:26.508052 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:16:26.508060 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-27 22:16:26.508080 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:16:26.508087 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:16:26.508093 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:16:26.508099 | orchestrator |
2025-09-27 22:16:26.508106 | orchestrator |
2025-09-27 22:16:26.508111 | orchestrator |
2025-09-27 22:16:26.508117 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:16:26.508124 | orchestrator | Saturday 27 September 2025 22:15:32 +0000 (0:00:11.110) 0:01:20.600 ****
2025-09-27 22:16:26.508130 | orchestrator | ===============================================================================
2025-09-27 22:16:26.508136 | orchestrator | Create admin user ------------------------------------------------------ 47.60s
2025-09-27 22:16:26.508142 | orchestrator | Restart ceph manager service ------------------------------------------- 23.84s
2025-09-27 22:16:26.508162 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.78s
2025-09-27 22:16:26.508169 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.20s
2025-09-27 22:16:26.508175 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.09s
2025-09-27 22:16:26.508181 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.06s
2025-09-27 22:16:26.508187 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.98s
2025-09-27 22:16:26.508193 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.93s
2025-09-27 22:16:26.508199 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.93s
2025-09-27 22:16:26.508206 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.82s
2025-09-27 22:16:26.508212 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s
2025-09-27 22:16:26.508218 | orchestrator |
2025-09-27 22:16:26.508224 | orchestrator |
2025-09-27 22:16:26.508231 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:16:26.508256 | orchestrator |
2025-09-27 22:16:26.508262 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:16:26.508268 | orchestrator | Saturday 27 September 2025 22:14:12 +0000 (0:00:00.258) 0:00:00.258 ****
2025-09-27 22:16:26.508273 | orchestrator | ok: [testbed-manager]
2025-09-27 22:16:26.508294 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:16:26.508300 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:16:26.508306 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:16:26.508311 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:16:26.508316 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:16:26.508322 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:16:26.508327 | orchestrator |
2025-09-27 22:16:26.508332 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:16:26.508344 | orchestrator | Saturday 27 September 2025 22:14:12 +0000 (0:00:00.776) 0:00:01.034 ****
2025-09-27 22:16:26.508350 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-09-27 22:16:26.508355 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-09-27 22:16:26.508381 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-09-27 22:16:26.508386 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-09-27 22:16:26.508392 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-09-27 22:16:26.508397 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-09-27 22:16:26.508402 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-09-27 22:16:26.508408 | orchestrator |
2025-09-27 22:16:26.508413 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-09-27 22:16:26.508419 | orchestrator |
2025-09-27 22:16:26.508424 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-27 22:16:26.508429 | orchestrator | Saturday 27 September 2025 22:14:13 +0000 (0:00:00.639) 0:00:01.674 ****
2025-09-27 22:16:26.508435 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:16:26.508442 | orchestrator |
2025-09-27 22:16:26.508448 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-09-27 22:16:26.508453 | orchestrator | Saturday 27 September 2025 22:14:14 +0000 (0:00:01.388) 0:00:03.063 ****
2025-09-27 22:16:26.508461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.508474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.508480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.508571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-27 22:16:26.508579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.508609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.508615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.508621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.508627 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.508636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.508664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.508692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.508704 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.508710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.508716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.508722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.508727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.508763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.508774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.508781 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-27 22:16:26.508792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.508799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.508805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.508826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.508839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name':
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.508858 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.508947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.508959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 
22:16:26.508967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.508986 | orchestrator | 2025-09-27 22:16:26.508996 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-27 22:16:26.509005 | orchestrator | Saturday 27 September 2025 22:14:17 +0000 (0:00:03.020) 0:00:06.084 **** 2025-09-27 22:16:26.509015 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:16:26.509025 | orchestrator | 2025-09-27 22:16:26.509032 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-27 22:16:26.509037 | orchestrator | Saturday 27 September 2025 22:14:19 +0000 (0:00:01.362) 0:00:07.446 **** 2025-09-27 22:16:26.509043 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 22:16:26.509054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.509060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.509078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.509102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.509108 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.509134 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.509140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.509146 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.509161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.509167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.509244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.509251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.509257 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.509271 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.509277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.509283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.509305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.510188 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 22:16:26.510247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510267 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.510283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.510317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.510321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-27 22:16:26.510325 | orchestrator | 2025-09-27 22:16:26.510330 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-27 22:16:26.510334 | orchestrator | Saturday 27 September 2025 22:14:24 +0000 (0:00:05.404) 0:00:12.851 **** 2025-09-27 22:16:26.510339 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-27 22:16:26.510344 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 22:16:26.510355 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 22:16:26.510363 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-27 22:16:26.510368 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:16:26.510372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 22:16:26.510377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:16:26.510381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 22:16:26.510385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510399 | orchestrator | skipping: [testbed-manager]
2025-09-27 22:16:26.510404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510445 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:16:26.510449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510453 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:16:26.510460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510464 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:16:26.510468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510481 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:16:26.510488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510504 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:16:26.510511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510523 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:16:26.510527 | orchestrator |
2025-09-27 22:16:26.510531 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-09-27 22:16:26.510535 | orchestrator | Saturday 27 September 2025 22:14:25 +0000 (0:00:01.399) 0:00:14.251 ****
2025-09-27 22:16:26.510539 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-27 22:16:26.510547 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510551 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510560 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-27 22:16:26.510565 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510569 | orchestrator | skipping: [testbed-manager]
2025-09-27 22:16:26.510573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510630 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:16:26.510634 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:16:26.510638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510653 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:16:26.510657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510684 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:16:26.510688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510703 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:16:26.510707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510726 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:16:26.510730 | orchestrator |
2025-09-27 22:16:26.510734 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-09-27 22:16:26.510738 | orchestrator | Saturday 27 September 2025 22:14:27 +0000 (0:00:01.891) 0:00:16.142 ****
2025-09-27 22:16:26.510742 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-27 22:16:26.510747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510839 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-27 22:16:26.510855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510909 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 22:16:26.510934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510941 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-27 22:16:26.510950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-27 22:16:26.510958 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.510974 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.510978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.511033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.511050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.511054 | orchestrator | 2025-09-27 22:16:26.511058 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-27 22:16:26.511063 | orchestrator | Saturday 27 September 2025 22:14:33 +0000 (0:00:05.292) 0:00:21.434 **** 2025-09-27 22:16:26.511067 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 22:16:26.511071 | orchestrator | 2025-09-27 22:16:26.511075 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-27 22:16:26.511079 | orchestrator | Saturday 27 September 2025 22:14:34 +0000 (0:00:00.907) 0:00:22.341 **** 2025-09-27 22:16:26.511083 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083538, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6190155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511089 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083538, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6190155, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511093 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083538, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6190155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511098 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083538, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6190155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511104 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083538, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6190155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511117 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083577, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6246498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511122 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083577, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6246498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511127 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083538, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6190155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511131 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083577, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6246498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511135 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083538, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6190155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511139 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083577, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6246498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511146 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083526, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6186788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511156 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083526, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6186788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511161 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083577, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6246498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511165 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083577, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 
1759008808.6246498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511169 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083562, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6220422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511173 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083526, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6186788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511177 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083526, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6186788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511184 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083577, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6246498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511317 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083562, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6220422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511326 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083526, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6186788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511332 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083520, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6172354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511339 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083526, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6186788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511345 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083562, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6220422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511352 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083520, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6172354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511368 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083562, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6220422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511378 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1083541, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6194444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511385 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083562, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 
1759008808.6220422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511390 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1083541, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6194444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511397 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083562, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6220422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511403 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083520, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6172354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511410 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1083553, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6208737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511421 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083520, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6172354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511428 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083526, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6186788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511432 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083520, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6172354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
[repeated per-item loop output 2025-09-27 22:16:26.511436–22:16:26.511815 condensed; every item is a root:root, mode 0644 regular file under /operations/prometheus/, with identical stat fields to the entry above]
2025-09-27 22:16:26.511436 | orchestrator | skipping: [testbed-node-0], [testbed-node-1], [testbed-node-2], [testbed-node-3], [testbed-node-4], [testbed-node-5] => (items: alertmanager.rules, size 5051; alertmanager.rec.rules, size 3; cadvisor.rules, size 3900; ceph.rec.rules, size 3; elasticsearch.rules, size 5987; haproxy.rules, size 7933; hardware.rules, size 5593; mysql.rules, size 3792; node.rules, size 13522; node.rec.rules, size 2309; prometheus-extra.rules, size 7408; prometheus.rec.rules, size 3; rabbitmq.rules, size 3539; redfish.rules, size 334)
2025-09-27 22:16:26.511619 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules, size 12293)
2025-09-27 22:16:26.511706 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, size 3900)
2025-09-27 22:16:26.511754 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:16:26.511778 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:16:26.511815 | orchestrator | skipping: [testbed-node-5]
=> (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083551, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6203954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511819 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083548, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6199784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511826 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1083519, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.616949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511830 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1083541, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6194444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511837 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083548, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6199784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511841 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083551, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6203954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511845 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083548, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 
1759008808.6199784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511852 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083610, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.627692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511856 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:16:26.511860 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083610, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.627692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511864 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:16:26.511870 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083548, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6199784, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511896 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083610, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.627692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511901 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:16:26.511905 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083610, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.627692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 22:16:26.511909 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:16:26.511913 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1083553, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6208737, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511917 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1083544, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6199784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511924 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1083536, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6186788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511929 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083574, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6237862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511936 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083517, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.616724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511946 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083613, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.628084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511951 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1083569, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6226666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511955 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083524, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6174502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511960 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1083519, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.616949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511967 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083551, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6203954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 22:16:26.511972 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083548, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6199784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-27 22:16:26.511983 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083610, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.627692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-27 22:16:26.511987 | orchestrator |
2025-09-27 22:16:26.511992 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-27 22:16:26.511996 | orchestrator | Saturday 27 September 2025 22:14:57 +0000 (0:00:23.355) 0:00:45.697 ****
2025-09-27 22:16:26.512001 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-27 22:16:26.512005 | orchestrator |
2025-09-27 22:16:26.512010 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-27 22:16:26.512015 | orchestrator | Saturday 27 September 2025 22:14:58 +0000 (0:00:00.599) 0:00:46.296 ****
2025-09-27 22:16:26.512020 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-09-27 22:16:26.512043 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-27 22:16:26.512047 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-09-27 22:16:26.512070 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-09-27 22:16:26.512092 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-09-27 22:16:26.512115 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-09-27 22:16:26.512144 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-09-27 22:16:26.512182 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-09-27 22:16:26.512219 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-27 22:16:26.512226 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-27 22:16:26.512238 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-27 22:16:26.512245 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-27 22:16:26.512252 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-27 22:16:26.512259 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-27 22:16:26.512266 | orchestrator |
2025-09-27 22:16:26.512273 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-27 22:16:26.512280 | orchestrator | Saturday 27 September 2025 22:14:59 +0000 (0:00:01.834) 0:00:48.131 ****
2025-09-27 22:16:26.512287 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-27 22:16:26.512294 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:16:26.512301 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-27 22:16:26.512308 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-27 22:16:26.512315 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:16:26.512322 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:16:26.512329 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-27 22:16:26.512385 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:16:26.512392 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-27 22:16:26.512399 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:16:26.512405 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-27 22:16:26.512411 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:16:26.512443 | orchestrator | fatal: [testbed-manager]: FAILED!
=> {"msg": "{{ prometheus_blackbox_exporter_endpoints_default | selectattr('enabled', 'true') | map(attribute='endpoints') | flatten | union(prometheus_blackbox_exporter_endpoints_custom) | unique | select | list }}: [{'endpoints': ['aodh:os_endpoint:{{ aodh_public_endpoint }}', \"{{ ('aodh_internal:os_endpoint:' + aodh_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_aodh | bool }}'}, {'endpoints': ['barbican:os_endpoint:{{ barbican_public_endpoint }}', \"{{ ('barbican_internal:os_endpoint:' + barbican_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_barbican | bool }}'}, {'endpoints': ['blazar:os_endpoint:{{ blazar_public_base_endpoint }}', \"{{ ('blazar_internal:os_endpoint:' + blazar_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_blazar | bool }}'}, {'endpoints': ['ceph_rgw:http_2xx:{{ ceph_rgw_public_base_endpoint }}', \"{{ ('ceph_rgw_internal:http_2xx:' + ceph_rgw_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_ceph_rgw | bool }}'}, {'endpoints': ['cinder:os_endpoint:{{ cinder_public_base_endpoint }}', \"{{ ('cinder_internal:os_endpoint:' + cinder_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_cinder | bool }}'}, {'endpoints': ['cloudkitty:os_endpoint:{{ cloudkitty_public_endpoint }}', \"{{ ('cloudkitty_internal:os_endpoint:' + cloudkitty_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_cloudkitty | bool }}'}, {'endpoints': ['designate:os_endpoint:{{ designate_public_endpoint }}', \"{{ ('designate_internal:os_endpoint:' + designate_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_designate | bool }}'}, {'endpoints': ['glance:os_endpoint:{{ glance_public_endpoint }}', \"{{ ('glance_internal:os_endpoint:' + 
glance_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_glance | bool }}'}, {'endpoints': ['gnocchi:os_endpoint:{{ gnocchi_public_endpoint }}', \"{{ ('gnocchi_internal:os_endpoint:' + gnocchi_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_gnocchi | bool }}'}, {'endpoints': ['heat:os_endpoint:{{ heat_public_base_endpoint }}', \"{{ ('heat_internal:os_endpoint:' + heat_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\", 'heat_cfn:os_endpoint:{{ heat_cfn_public_base_endpoint }}', \"{{ ('heat_cfn_internal:os_endpoint:' + heat_cfn_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_heat | bool }}'}, {'endpoints': ['horizon:http_2xx:{{ horizon_public_endpoint }}', \"{{ ('horizon_internal:http_2xx:' + horizon_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_horizon | bool }}'}, {'endpoints': ['ironic:os_endpoint:{{ ironic_public_endpoint }}', \"{{ ('ironic_internal:os_endpoint:' + ironic_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\", 'ironic_inspector:os_endpoint:{{ ironic_inspector_public_endpoint }}', \"{{ ('ironic_inspector_internal:os_endpoint:' + ironic_inspector_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_ironic | bool }}'}, {'endpoints': ['keystone:os_endpoint:{{ keystone_public_url }}', \"{{ ('keystone_internal:os_endpoint:' + keystone_internal_url) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_keystone | bool }}'}, {'endpoints': ['magnum:os_endpoint:{{ magnum_public_base_endpoint }}', \"{{ ('magnum_internal:os_endpoint:' + magnum_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_magnum | bool }}'}, {'endpoints': ['manila:os_endpoint:{{ manila_public_base_endpoint }}', \"{{ 
('manila_internal:os_endpoint:' + manila_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_manila | bool }}'}, {'endpoints': ['masakari:os_endpoint:{{ masakari_public_endpoint }}', \"{{ ('masakari_internal:os_endpoint:' + masakari_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_masakari | bool }}'}, {'endpoints': ['mistral:os_endpoint:{{ mistral_public_base_endpoint }}', \"{{ ('mistral_internal:os_endpoint:' + mistral_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_mistral | bool }}'}, {'endpoints': ['neutron:os_endpoint:{{ neutron_public_endpoint }}', \"{{ ('neutron_internal:os_endpoint:' + neutron_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_neutron | bool }}'}, {'endpoints': ['nova:os_endpoint:{{ nova_public_base_endpoint }}', \"{{ ('nova_internal:os_endpoint:' + nova_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_nova | bool }}'}, {'endpoints': ['octavia:os_endpoint:{{ octavia_public_endpoint }}', \"{{ ('octavia_internal:os_endpoint:' + octavia_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_octavia | bool }}'}, {'endpoints': ['placement:os_endpoint:{{ placement_public_endpoint }}', \"{{ ('placement_internal:os_endpoint:' + placement_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_placement | bool }}'}, {'endpoints': ['skyline_apiserver:os_endpoint:{{ skyline_apiserver_public_endpoint }}', \"{{ ('skyline_apiserver_internal:os_endpoint:' + skyline_apiserver_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\", 'skyline_console:os_endpoint:{{ skyline_console_public_endpoint }}', \"{{ ('skyline_console_internal:os_endpoint:' + skyline_console_internal_endpoint) if not kolla_same_external_internal_vip | bool 
}}\"], 'enabled': '{{ enable_skyline | bool }}'}, {'endpoints': ['swift:os_endpoint:{{ swift_public_base_endpoint }}', \"{{ ('swift_internal:os_endpoint:' + swift_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_swift | bool }}'}, {'endpoints': ['tacker:os_endpoint:{{ tacker_public_endpoint }}', \"{{ ('tacker_internal:os_endpoint:' + tacker_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_tacker | bool }}'}, {'endpoints': ['trove:os_endpoint:{{ trove_public_base_endpoint }}', \"{{ ('trove_internal:os_endpoint:' + trove_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_trove | bool }}'}, {'endpoints': ['venus:os_endpoint:{{ venus_public_endpoint }}', \"{{ ('venus_internal:os_endpoint:' + venus_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_venus | bool }}'}, {'endpoints': ['watcher:os_endpoint:{{ watcher_public_endpoint }}', \"{{ ('watcher_internal:os_endpoint:' + watcher_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_watcher | bool }}'}, {'endpoints': ['zun:os_endpoint:{{ zun_public_base_endpoint }}', \"{{ ('zun_internal:os_endpoint:' + zun_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_zun | bool }}'}, {'endpoints': \"{% set etcd_endpoints = [] %}{% for host in groups.get('etcd', []) %}{{ etcd_endpoints.append('etcd_' + host + ':http_2xx:' + hostvars[host]['etcd_protocol'] + '://' + ('api' | kolla_address(host) | put_address_in_context('url')) + ':' + hostvars[host]['etcd_client_port'] + '/metrics')}}{% endfor %}{{ etcd_endpoints }}\", 'enabled': '{{ enable_etcd | bool }}'}, {'endpoints': ['grafana:http_2xx:{{ grafana_public_endpoint }}', \"{{ ('grafana_internal:http_2xx:' + grafana_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ 
enable_grafana | bool }}'}, {'endpoints': ['opensearch:http_2xx:{{ opensearch_internal_endpoint }}'], 'enabled': '{{ enable_opensearch | bool }}'}, {'endpoints': ['opensearch_dashboards:http_2xx_opensearch_dashboards:{{ opensearch_dashboards_internal_endpoint }}/api/status'], 'enabled': '{{ enable_opensearch_dashboards | bool }}'}, {'endpoints': ['opensearch_dashboards_external:http_2xx_opensearch_dashboards:{{ opensearch_dashboards_external_endpoint }}/api/status'], 'enabled': '{{ enable_opensearch_dashboards_external | bool }}'}, {'endpoints': ['prometheus:http_2xx_prometheus:{{ prometheus_public_endpoint if enable_prometheus_server_external else prometheus_internal_endpoint }}/-/healthy'], 'enabled': '{{ enable_prometheus | bool }}'}, {'endpoints': ['prometheus_alertmanager:http_2xx_alertmanager:{{ prometheus_alertmanager_public_endpoint if enable_prometheus_alertmanager_external else prometheus_alertmanager_internal_endpoint }}'], 'enabled': '{{ enable_prometheus_alertmanager | bool }}'}, {'endpoints': \"{% set rabbitmq_endpoints = [] %}{% for host in groups.get('rabbitmq', []) %}{{ rabbitmq_endpoints.append('rabbitmq_' + host + (':tls_connect:' if rabbitmq_enable_tls | bool else ':tcp_connect:') + ('api' | kolla_address(host) | put_address_in_context('url')) + ':' + hostvars[host]['rabbitmq_port'] ) }}{% endfor %}{{ rabbitmq_endpoints }}\", 'enabled': '{{ enable_rabbitmq | bool }}'}, {'endpoints': \"{% set redis_endpoints = [] %}{% for host in groups.get('redis', []) %}{{ redis_endpoints.append('redis_' + host + ':tcp_connect:' + ('api' | kolla_address(host) | put_address_in_context('url')) + ':' + hostvars[host]['redis_port']) }}{% endfor %}{{ redis_endpoints }}\", 'enabled': '{{ enable_redis | bool }}'}]: 'swift_public_base_endpoint' is undefined"} 2025-09-27 22:16:26.512464 | orchestrator | 2025-09-27 22:16:26.512470 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-27 22:16:26.512477 | orchestrator | 
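The traceback above shows why `testbed-manager` fails: the blackbox-exporter endpoint template references `swift_public_base_endpoint`, which is undefined in this configuration. Note that the surrounding entry is already gated by `'enabled': '{{ enable_swift | bool }}'`, so the variable only has to be *renderable*, not meaningful. A minimal sketch of one possible workaround, assuming an override in the operator's globals file; the path and the empty value are assumptions for illustration, not taken from this log:

```yaml
# /etc/kolla/globals.yml (hypothetical override location)
# Give the missing variable a renderable default so the blackbox-exporter
# endpoint list can be templated; the swift entry itself stays disabled
# as long as enable_swift is false.
swift_public_base_endpoint: ""
```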
Saturday 27 September 2025 22:15:10 +0000 (0:00:10.708) 0:00:58.840 **** 2025-09-27 22:16:26.512489 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 22:16:26.512495 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:16:26.512501 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 22:16:26.512508 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:16:26.512514 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 22:16:26.512521 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:16:26.512528 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 22:16:26.512534 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:16:26.512540 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 22:16:26.512547 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:16:26.512553 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 22:16:26.512560 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:16:26.512567 | orchestrator | 2025-09-27 22:16:26.512573 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-27 22:16:26.512580 | orchestrator | Saturday 27 September 2025 22:15:12 +0000 (0:00:01.943) 0:01:00.783 **** 2025-09-27 22:16:26.512587 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 22:16:26.512598 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 22:16:26.512605 
| orchestrator | skipping: [testbed-node-0] 2025-09-27 22:16:26.512611 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:16:26.512618 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 22:16:26.512624 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:16:26.512631 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 22:16:26.512637 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:16:26.512644 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 22:16:26.512651 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:16:26.512661 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 22:16:26.512667 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:16:26.512674 | orchestrator | 2025-09-27 22:16:26.512681 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-27 22:16:26.512688 | orchestrator | Saturday 27 September 2025 22:15:14 +0000 (0:00:01.507) 0:01:02.291 **** 2025-09-27 22:16:26.512694 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 22:16:26.512701 | orchestrator | 2025-09-27 22:16:26.512708 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-27 22:16:26.512715 | orchestrator | Saturday 27 September 2025 22:15:14 +0000 (0:00:00.536) 0:01:02.827 **** 2025-09-27 22:16:26.512722 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:16:26.512728 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:16:26.512735 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:16:26.512741 | 
orchestrator | skipping: [testbed-node-3] 2025-09-27 22:16:26.512748 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:16:26.512754 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:16:26.512761 | orchestrator | 2025-09-27 22:16:26.512767 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-27 22:16:26.512778 | orchestrator | Saturday 27 September 2025 22:15:15 +0000 (0:00:00.725) 0:01:03.553 **** 2025-09-27 22:16:26.512785 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:16:26.512791 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:16:26.512798 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:16:26.512804 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:16:26.512811 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:16:26.512817 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:16:26.512824 | orchestrator | 2025-09-27 22:16:26.512829 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-27 22:16:26.512836 | orchestrator | Saturday 27 September 2025 22:15:17 +0000 (0:00:01.853) 0:01:05.406 **** 2025-09-27 22:16:26.512842 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 22:16:26.512848 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 22:16:26.512854 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:16:26.512860 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:16:26.512866 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 22:16:26.512872 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:16:26.512893 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 22:16:26.512900 | orchestrator | skipping: [testbed-node-4] 
2025-09-27 22:16:26.512906 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 22:16:26.512912 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:16:26.512919 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 22:16:26.512925 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:16:26.512931 | orchestrator | 2025-09-27 22:16:26.512936 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-27 22:16:26.512942 | orchestrator | Saturday 27 September 2025 22:15:18 +0000 (0:00:01.381) 0:01:06.788 **** 2025-09-27 22:16:26.512949 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 22:16:26.512955 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:16:26.512962 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 22:16:26.512968 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:16:26.512974 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 22:16:26.512980 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:16:26.512987 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 22:16:26.512995 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:16:26.513002 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 22:16:26.513009 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:16:26.513015 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 22:16:26.513022 | 
orchestrator | skipping: [testbed-node-5] 2025-09-27 22:16:26.513029 | orchestrator | 2025-09-27 22:16:26.513036 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-27 22:16:26.513047 | orchestrator | Saturday 27 September 2025 22:15:19 +0000 (0:00:01.253) 0:01:08.041 **** 2025-09-27 22:16:26.513054 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:16:26.513060 | orchestrator | 2025-09-27 22:16:26.513067 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-27 22:16:26.513074 | orchestrator | Saturday 27 September 2025 22:15:20 +0000 (0:00:00.726) 0:01:08.768 **** 2025-09-27 22:16:26.513081 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:16:26.513093 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:16:26.513100 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:16:26.513107 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:16:26.513114 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:16:26.513122 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:16:26.513128 | orchestrator | 2025-09-27 22:16:26.513136 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-27 22:16:26.513143 | orchestrator | Saturday 27 September 2025 22:15:21 +0000 (0:00:00.504) 0:01:09.272 **** 2025-09-27 22:16:26.513150 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:16:26.513157 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:16:26.513163 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:16:26.513170 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:16:26.513177 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:16:26.513189 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:16:26.513196 | orchestrator | 2025-09-27 22:16:26.513203 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 
2025-09-27 22:16:26.513209 | orchestrator | Saturday 27 September 2025 22:15:21 +0000 (0:00:00.646) 0:01:09.918 **** 2025-09-27 22:16:26.513217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.513224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.513231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.513238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.513246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.513256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 22:16:26.513270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 
22:16:26.513282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.513290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.513296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.513304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.513311 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.513318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.513332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.513340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.513352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.513360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.513365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.513370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.513374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.513378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 22:16:26.513391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.513399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.513410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 22:16:26.513418 | orchestrator | 2025-09-27 22:16:26.513424 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-27 22:16:26.513431 | orchestrator | Saturday 27 September 2025 22:15:25 +0000 (0:00:04.280) 0:01:14.199 **** 2025-09-27 22:16:26.513439 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 22:16:26.513446 | orchestrator | 2025-09-27 22:16:26.513453 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 22:16:26.513459 | orchestrator | Saturday 27 September 2025 22:15:29 +0000 (0:00:03.610) 0:01:17.810 **** 2025-09-27 22:16:26.513466 | 
orchestrator | 2025-09-27 22:16:26.513472 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 22:16:26.513478 | orchestrator | Saturday 27 September 2025 22:15:29 +0000 (0:00:00.063) 0:01:17.873 **** 2025-09-27 22:16:26.513485 | orchestrator | 2025-09-27 22:16:26.513491 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 22:16:26.513498 | orchestrator | Saturday 27 September 2025 22:15:29 +0000 (0:00:00.068) 0:01:17.942 **** 2025-09-27 22:16:26.513503 | orchestrator | 2025-09-27 22:16:26.513510 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 22:16:26.513515 | orchestrator | Saturday 27 September 2025 22:15:29 +0000 (0:00:00.071) 0:01:18.013 **** 2025-09-27 22:16:26.513521 | orchestrator | 2025-09-27 22:16:26.513527 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 22:16:26.513533 | orchestrator | Saturday 27 September 2025 22:15:29 +0000 (0:00:00.065) 0:01:18.078 **** 2025-09-27 22:16:26.513538 | orchestrator | 2025-09-27 22:16:26.513545 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 22:16:26.513550 | orchestrator | Saturday 27 September 2025 22:15:30 +0000 (0:00:00.253) 0:01:18.332 **** 2025-09-27 22:16:26.513557 | orchestrator | 2025-09-27 22:16:26.513563 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-27 22:16:26.513570 | orchestrator | Saturday 27 September 2025 22:15:30 +0000 (0:00:00.061) 0:01:18.394 **** 2025-09-27 22:16:26.513583 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:16:26.513589 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:16:26.513596 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:16:26.513602 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:16:26.513606 | 
orchestrator | changed: [testbed-node-5] 2025-09-27 22:16:26.513610 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:16:26.513614 | orchestrator | 2025-09-27 22:16:26.513618 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-27 22:16:26.513622 | orchestrator | Saturday 27 September 2025 22:15:43 +0000 (0:00:12.967) 0:01:31.361 **** 2025-09-27 22:16:26.513626 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:16:26.513630 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:16:26.513634 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:16:26.513638 | orchestrator | 2025-09-27 22:16:26.513642 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-27 22:16:26.513646 | orchestrator | Saturday 27 September 2025 22:15:48 +0000 (0:00:05.016) 0:01:36.378 **** 2025-09-27 22:16:26.513651 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:16:26.513657 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:16:26.513663 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:16:26.513669 | orchestrator | 2025-09-27 22:16:26.513675 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-27 22:16:26.513682 | orchestrator | Saturday 27 September 2025 22:15:53 +0000 (0:00:05.287) 0:01:41.665 **** 2025-09-27 22:16:26.513688 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:16:26.513695 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:16:26.513702 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:16:26.513708 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:16:26.513714 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:16:26.513721 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:16:26.513727 | orchestrator | 2025-09-27 22:16:26.513733 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-27 
22:16:26.513740 | orchestrator | Saturday 27 September 2025 22:16:06 +0000 (0:00:13.238) 0:01:54.904 **** 2025-09-27 22:16:26.513746 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:16:26.513752 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:16:26.513758 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:16:26.513766 | orchestrator | 2025-09-27 22:16:26.513773 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-27 22:16:26.513779 | orchestrator | Saturday 27 September 2025 22:16:12 +0000 (0:00:05.812) 0:02:00.717 **** 2025-09-27 22:16:26.513786 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:16:26.513797 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:16:26.513804 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:16:26.513810 | orchestrator | 2025-09-27 22:16:26.513816 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:16:26.513822 | orchestrator | testbed-manager : ok=11  changed=4  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025-09-27 22:16:26.513830 | orchestrator | testbed-node-0 : ok=17  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-27 22:16:26.513836 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-27 22:16:26.513842 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-27 22:16:26.513854 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-27 22:16:26.513860 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-27 22:16:26.513872 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-27 22:16:26.513901 | orchestrator | 2025-09-27 22:16:26.513908 | 
orchestrator | 2025-09-27 22:16:26.513914 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:16:26.513920 | orchestrator | Saturday 27 September 2025 22:16:24 +0000 (0:00:11.565) 0:02:12.283 **** 2025-09-27 22:16:26.513926 | orchestrator | =============================================================================== 2025-09-27 22:16:26.513932 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.36s 2025-09-27 22:16:26.513939 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.24s 2025-09-27 22:16:26.513945 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.97s 2025-09-27 22:16:26.513952 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.57s 2025-09-27 22:16:26.513958 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 10.71s 2025-09-27 22:16:26.513964 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.81s 2025-09-27 22:16:26.513970 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.40s 2025-09-27 22:16:26.513977 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.29s 2025-09-27 22:16:26.513983 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.29s 2025-09-27 22:16:26.513989 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.02s 2025-09-27 22:16:26.513995 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.28s 2025-09-27 22:16:26.514001 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 3.61s 2025-09-27 22:16:26.514007 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.02s 
2025-09-27 22:16:26.514059 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 1.94s 2025-09-27 22:16:26.514070 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.89s 2025-09-27 22:16:26.514077 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.85s 2025-09-27 22:16:26.514083 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.83s 2025-09-27 22:16:26.514090 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.51s 2025-09-27 22:16:26.514097 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.40s 2025-09-27 22:16:26.514103 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.39s 2025-09-27 22:16:26.514110 | orchestrator | 2025-09-27 22:16:26 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:16:26.514117 | orchestrator | 2025-09-27 22:16:26 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED 2025-09-27 22:16:26.514124 | orchestrator | 2025-09-27 22:16:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:16:29.550066 | orchestrator | 2025-09-27 22:16:29 | INFO  | Task f8df45b2-4aeb-487f-9a19-de3606fdc58b is in state STARTED 2025-09-27 22:16:29.551556 | orchestrator | 2025-09-27 22:16:29 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state STARTED 2025-09-27 22:16:29.552103 | orchestrator | 2025-09-27 22:16:29 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:16:29.553007 | orchestrator | 2025-09-27 22:16:29 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED 2025-09-27 22:16:29.553057 | orchestrator | 2025-09-27 22:16:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:16:32.586114 | orchestrator | 2025-09-27 22:16:32 | INFO  | Task 
f8df45b2-4aeb-487f-9a19-de3606fdc58b is in state STARTED 2025-09-27 22:18:12.890638 | orchestrator | 2025-09-27 22:18:12 | INFO  | Task ef9011fc-67b8-4d56-b94b-4f7057f444cf is in state SUCCESS 2025-09-27 22:18:12.892356 | orchestrator | 2025-09-27 22:18:12.892418 | orchestrator | 2025-09-27 22:18:12.892427 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:18:12.892436 | orchestrator | 2025-09-27 22:18:12.892443 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:18:12.892450 | orchestrator | Saturday 27 September 2025 22:14:33 +0000 (0:00:00.238) 0:00:00.238 **** 2025-09-27 22:18:12.892457 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:18:12.892465 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:18:12.892472 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:18:12.892479 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:18:12.892508 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:18:12.892515 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:18:12.892521 | orchestrator | 2025-09-27 22:18:12.892528 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:18:12.892534 | orchestrator | Saturday 27 September 2025 22:14:34 +0000 (0:00:00.717) 0:00:00.955 **** 2025-09-27 22:18:12.892540 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-27 22:18:12.892547 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-27 22:18:12.892554 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-27 22:18:12.892560 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-27 22:18:12.892567 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-27 22:18:12.892573 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-27 22:18:12.892612 | orchestrator | 2025-09-27 
22:18:12.892619 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-27 22:18:12.892625 | orchestrator | 2025-09-27 22:18:12.892632 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-27 22:18:12.892638 | orchestrator | Saturday 27 September 2025 22:14:36 +0000 (0:00:01.508) 0:00:02.464 **** 2025-09-27 22:18:12.892658 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:18:12.892666 | orchestrator | 2025-09-27 22:18:12.892672 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-27 22:18:12.892679 | orchestrator | Saturday 27 September 2025 22:14:38 +0000 (0:00:02.169) 0:00:04.634 **** 2025-09-27 22:18:12.892686 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-27 22:18:12.892692 | orchestrator | 2025-09-27 22:18:12.892699 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-27 22:18:12.892705 | orchestrator | Saturday 27 September 2025 22:14:41 +0000 (0:00:03.346) 0:00:07.980 **** 2025-09-27 22:18:12.892712 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-27 22:18:12.892719 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-27 22:18:12.892725 | orchestrator | 2025-09-27 22:18:12.892771 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-27 22:18:12.892779 | orchestrator | Saturday 27 September 2025 22:14:47 +0000 (0:00:05.867) 0:00:13.848 **** 2025-09-27 22:18:12.892785 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 22:18:12.892792 | orchestrator | 
2025-09-27 22:18:12.892798 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-27 22:18:12.892805 | orchestrator | Saturday 27 September 2025 22:14:50 +0000 (0:00:03.120) 0:00:16.968 **** 2025-09-27 22:18:12.892900 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 22:18:12.892909 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-27 22:18:12.892916 | orchestrator | 2025-09-27 22:18:12.892922 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-27 22:18:12.892928 | orchestrator | Saturday 27 September 2025 22:14:54 +0000 (0:00:03.498) 0:00:20.467 **** 2025-09-27 22:18:12.892936 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 22:18:12.892943 | orchestrator | 2025-09-27 22:18:12.892951 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-27 22:18:12.892958 | orchestrator | Saturday 27 September 2025 22:14:57 +0000 (0:00:03.072) 0:00:23.539 **** 2025-09-27 22:18:12.892965 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-27 22:18:12.892972 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-27 22:18:12.892979 | orchestrator | 2025-09-27 22:18:12.892986 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-27 22:18:12.893000 | orchestrator | Saturday 27 September 2025 22:15:04 +0000 (0:00:07.311) 0:00:30.851 **** 2025-09-27 22:18:12.893014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.893047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.893067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.893080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.893092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.893111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.893131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.893140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.893154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.893168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.893184 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.893205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.893215 | orchestrator | 2025-09-27 22:18:12.893232 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-27 22:18:12.893243 | orchestrator | Saturday 27 September 2025 22:15:06 +0000 (0:00:02.345) 0:00:33.196 **** 2025-09-27 22:18:12.893254 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.893265 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:12.893276 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:12.893284 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:18:12.893290 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:18:12.893298 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:18:12.893304 | orchestrator | 2025-09-27 22:18:12.893310 | orchestrator | TASK 
[cinder : include_tasks] ************************************************** 2025-09-27 22:18:12.893317 | orchestrator | Saturday 27 September 2025 22:15:07 +0000 (0:00:00.659) 0:00:33.856 **** 2025-09-27 22:18:12.893323 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.893329 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:12.893335 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:12.893342 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:18:12.893382 | orchestrator | 2025-09-27 22:18:12.893389 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-27 22:18:12.893395 | orchestrator | Saturday 27 September 2025 22:15:08 +0000 (0:00:01.421) 0:00:35.277 **** 2025-09-27 22:18:12.893401 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-27 22:18:12.893408 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-27 22:18:12.893414 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-27 22:18:12.893420 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-27 22:18:12.893426 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-27 22:18:12.893437 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-27 22:18:12.893443 | orchestrator | 2025-09-27 22:18:12.893449 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-27 22:18:12.893456 | orchestrator | Saturday 27 September 2025 22:15:10 +0000 (0:00:01.829) 0:00:37.106 **** 2025-09-27 22:18:12.893464 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 22:18:12.893478 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 22:18:12.893485 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 22:18:12.893497 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 22:18:12.893507 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 
'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 22:18:12.893514 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 22:18:12.893526 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 22:18:12.893533 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 22:18:12.893544 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 22:18:12.893555 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 22:18:12.893562 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 22:18:12.893573 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 22:18:12.893580 | orchestrator | 
2025-09-27 22:18:12.893586 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-27 22:18:12.893592 | orchestrator | Saturday 27 September 2025 22:15:14 +0000 (0:00:04.085) 0:00:41.191 **** 2025-09-27 22:18:12.893598 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 22:18:12.893605 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 22:18:12.893612 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 22:18:12.893618 | orchestrator | 2025-09-27 22:18:12.893624 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-27 22:18:12.893630 | orchestrator | Saturday 27 September 2025 22:15:16 +0000 (0:00:01.736) 0:00:42.928 **** 2025-09-27 22:18:12.893636 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-27 22:18:12.893643 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-27 22:18:12.893649 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-27 22:18:12.893655 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 22:18:12.893670 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 22:18:12.893686 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 22:18:12.893697 | orchestrator | 2025-09-27 22:18:12.893707 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-27 22:18:12.893718 | orchestrator | Saturday 27 September 2025 22:15:19 +0000 (0:00:02.778) 0:00:45.707 **** 2025-09-27 22:18:12.893727 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-27 22:18:12.893739 | 
orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-27 22:18:12.893753 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-27 22:18:12.893766 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-27 22:18:12.893774 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-27 22:18:12.893783 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-27 22:18:12.893793 | orchestrator | 2025-09-27 22:18:12.893802 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-27 22:18:12.893833 | orchestrator | Saturday 27 September 2025 22:15:20 +0000 (0:00:01.061) 0:00:46.769 **** 2025-09-27 22:18:12.893844 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.893853 | orchestrator | 2025-09-27 22:18:12.893881 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-27 22:18:12.893890 | orchestrator | Saturday 27 September 2025 22:15:20 +0000 (0:00:00.147) 0:00:46.917 **** 2025-09-27 22:18:12.893899 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.893908 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:12.893918 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:12.893927 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:18:12.893935 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:18:12.893943 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:18:12.893952 | orchestrator | 2025-09-27 22:18:12.893961 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-27 22:18:12.893975 | orchestrator | Saturday 27 September 2025 22:15:21 +0000 (0:00:00.632) 0:00:47.549 **** 2025-09-27 22:18:12.893987 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:18:12.893999 | 
orchestrator | 2025-09-27 22:18:12.894010 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-27 22:18:12.894081 | orchestrator | Saturday 27 September 2025 22:15:22 +0000 (0:00:01.277) 0:00:48.827 **** 2025-09-27 22:18:12.894092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.894106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.894130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.894142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.894164 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.894172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.894179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.894186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.894907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895046 | orchestrator | 2025-09-27 22:18:12.895057 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS 
certificate] *** 2025-09-27 22:18:12.895067 | orchestrator | Saturday 27 September 2025 22:15:25 +0000 (0:00:02.951) 0:00:51.778 **** 2025-09-27 22:18:12.895078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 22:18:12.895102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895120 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.895131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 22:18:12.895146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895155 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:12.895164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 22:18:12.895174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895183 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:12.895194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895225 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:18:12.895239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895283 | orchestrator | skipping: [testbed-node-4] 
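The loop items echoed above each carry a `healthcheck` mapping with second-granularity string values (`interval`, `timeout`, `start_period`, `retries`) and a `test` command list. As a rough illustration only (this snippet is not part of the job output, and the field mapping is an assumption about how such a dict could be consumed), Docker's `HealthConfig` expects durations in nanoseconds, so a translation might look like:

```python
# Sketch: convert a kolla-style healthcheck dict (seconds as strings, as seen
# in the loop items above) into Docker HealthConfig keys (nanoseconds).
# This is an illustrative mapping, not the deploy tooling's actual code.

NANOS_PER_SECOND = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    """Map kolla-style healthcheck fields to Docker HealthConfig fields."""
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port cinder-volume 5672']
        "Interval": int(hc["interval"]) * NANOS_PER_SECOND,
        "Timeout": int(hc["timeout"]) * NANOS_PER_SECOND,
        "StartPeriod": int(hc["start_period"]) * NANOS_PER_SECOND,
        "Retries": int(hc["retries"]),
    }

# Example using the values logged for cinder-volume above.
example = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
    "timeout": "30",
}
print(to_docker_healthcheck(example)["Interval"])  # 30000000000
```

The string-to-int conversion matters because the logged dicts quote every numeric field; passing them through unconverted would be rejected by an API expecting integers.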
2025-09-27 22:18:12.895293 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:18:12.895302 | orchestrator | 2025-09-27 22:18:12.895311 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-27 22:18:12.895320 | orchestrator | Saturday 27 September 2025 22:15:26 +0000 (0:00:01.420) 0:00:53.199 **** 2025-09-27 22:18:12.895334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 22:18:12.895350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895360 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.895369 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 22:18:12.895379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895388 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:12.895397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 22:18:12.895421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895431 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:12.895440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895465 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:18:12.895475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895502 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:18:12.895519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.895540 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:18:12.895550 | orchestrator | 2025-09-27 22:18:12.895560 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-27 22:18:12.895570 | orchestrator | Saturday 27 September 2025 22:15:28 +0000 (0:00:01.527) 0:00:54.727 **** 2025-09-27 22:18:12.895585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.895597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.895614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.895632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}) 2025-09-27 22:18:12.895719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895756 | orchestrator | 2025-09-27 22:18:12.895766 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-27 22:18:12.895776 | orchestrator | Saturday 27 September 2025 22:15:31 +0000 (0:00:02.684) 0:00:57.411 **** 2025-09-27 22:18:12.895787 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-27 22:18:12.895803 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:18:12.895835 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-27 22:18:12.895846 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:18:12.895856 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-27 22:18:12.895867 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-27 22:18:12.895877 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-27 22:18:12.895886 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:18:12.895895 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-27 22:18:12.895904 | orchestrator | 2025-09-27 22:18:12.895913 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-27 22:18:12.895922 | orchestrator | Saturday 27 September 2025 22:15:34 +0000 (0:00:02.979) 0:01:00.391 **** 2025-09-27 22:18:12.895931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.895949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.895963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.895973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.895990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896066 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896085 | orchestrator | 2025-09-27 22:18:12.896094 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-27 22:18:12.896103 | orchestrator | Saturday 27 September 2025 22:15:40 +0000 (0:00:06.808) 0:01:07.199 **** 2025-09-27 22:18:12.896117 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.896126 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:12.896135 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:12.896144 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:18:12.896153 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:18:12.896162 | orchestrator | changed: [testbed-node-5] 2025-09-27 
22:18:12.896171 | orchestrator | 2025-09-27 22:18:12.896180 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-27 22:18:12.896189 | orchestrator | Saturday 27 September 2025 22:15:42 +0000 (0:00:01.833) 0:01:09.033 **** 2025-09-27 22:18:12.896203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 22:18:12.896220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.896230 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.896239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 22:18:12.896249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.896264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 22:18:12.896274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.896283 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:12.896293 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:12.896306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}})  2025-09-27 22:18:12.896323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.896332 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:18:12.896341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.896350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.896360 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:18:12.896375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.896390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 22:18:12.896406 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:18:12.896415 | orchestrator | 2025-09-27 22:18:12.896424 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-27 22:18:12.896433 | orchestrator | Saturday 27 September 2025 22:15:44 +0000 (0:00:01.371) 0:01:10.405 **** 2025-09-27 22:18:12.896442 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.896451 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:12.896460 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:12.896469 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:18:12.896478 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:18:12.896486 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:18:12.896495 | orchestrator | 2025-09-27 22:18:12.896504 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-27 22:18:12.896513 | orchestrator | Saturday 27 September 2025 22:15:44 +0000 (0:00:00.714) 0:01:11.119 **** 2025-09-27 22:18:12.896522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.896532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:12.896547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 
22:18:12.896567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 
22:18:12.896670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:12.896679 | orchestrator | 2025-09-27 22:18:12.896689 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-27 22:18:12.896698 | orchestrator | Saturday 27 September 2025 22:15:46 +0000 (0:00:02.033) 0:01:13.153 **** 2025-09-27 22:18:12.896707 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.896716 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:12.896725 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:12.896734 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:18:12.896743 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:18:12.896751 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:18:12.896760 | orchestrator | 2025-09-27 22:18:12.896769 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-27 22:18:12.896778 | orchestrator | Saturday 27 September 2025 22:15:47 +0000 (0:00:00.505) 0:01:13.659 **** 2025-09-27 22:18:12.896787 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:18:12.896796 | orchestrator | 2025-09-27 22:18:12.896805 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-27 22:18:12.896831 | 
orchestrator | Saturday 27 September 2025 22:15:49 +0000 (0:00:02.346) 0:01:16.006 **** 2025-09-27 22:18:12.896847 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:18:12.896855 | orchestrator | 2025-09-27 22:18:12.896864 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-27 22:18:12.896873 | orchestrator | Saturday 27 September 2025 22:15:51 +0000 (0:00:02.301) 0:01:18.308 **** 2025-09-27 22:18:12.896881 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:18:12.896890 | orchestrator | 2025-09-27 22:18:12.896899 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 22:18:12.896908 | orchestrator | Saturday 27 September 2025 22:16:12 +0000 (0:00:20.852) 0:01:39.160 **** 2025-09-27 22:18:12.896916 | orchestrator | 2025-09-27 22:18:12.896931 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 22:18:12.896941 | orchestrator | Saturday 27 September 2025 22:16:12 +0000 (0:00:00.132) 0:01:39.293 **** 2025-09-27 22:18:12.896950 | orchestrator | 2025-09-27 22:18:12.896959 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 22:18:12.896968 | orchestrator | Saturday 27 September 2025 22:16:13 +0000 (0:00:00.132) 0:01:39.425 **** 2025-09-27 22:18:12.896977 | orchestrator | 2025-09-27 22:18:12.896986 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 22:18:12.896996 | orchestrator | Saturday 27 September 2025 22:16:13 +0000 (0:00:00.132) 0:01:39.558 **** 2025-09-27 22:18:12.897005 | orchestrator | 2025-09-27 22:18:12.897013 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 22:18:12.897022 | orchestrator | Saturday 27 September 2025 22:16:13 +0000 (0:00:00.130) 0:01:39.688 **** 2025-09-27 22:18:12.897031 | orchestrator | 2025-09-27 
22:18:12.897040 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 22:18:12.897049 | orchestrator | Saturday 27 September 2025 22:16:13 +0000 (0:00:00.188) 0:01:39.877 **** 2025-09-27 22:18:12.897057 | orchestrator | 2025-09-27 22:18:12.897066 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-27 22:18:12.897075 | orchestrator | Saturday 27 September 2025 22:16:13 +0000 (0:00:00.187) 0:01:40.064 **** 2025-09-27 22:18:12.897084 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:18:12.897093 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:18:12.897102 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:18:12.897111 | orchestrator | 2025-09-27 22:18:12.897120 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-27 22:18:12.897129 | orchestrator | Saturday 27 September 2025 22:16:40 +0000 (0:00:26.645) 0:02:06.710 **** 2025-09-27 22:18:12.897143 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:18:12.897152 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:18:12.897161 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:18:12.897170 | orchestrator | 2025-09-27 22:18:12.897179 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-27 22:18:12.897192 | orchestrator | Saturday 27 September 2025 22:16:46 +0000 (0:00:05.939) 0:02:12.649 **** 2025-09-27 22:18:12.897207 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:18:12.897221 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:18:12.897236 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:18:12.897252 | orchestrator | 2025-09-27 22:18:12.897267 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-27 22:18:12.897283 | orchestrator | Saturday 27 September 2025 22:17:59 +0000 (0:01:13.169) 0:03:25.819 
**** 2025-09-27 22:18:12.897297 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:18:12.897312 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:18:12.897324 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:18:12.897332 | orchestrator | 2025-09-27 22:18:12.897342 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-27 22:18:12.897351 | orchestrator | Saturday 27 September 2025 22:18:09 +0000 (0:00:10.144) 0:03:35.964 **** 2025-09-27 22:18:12.897360 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:12.897369 | orchestrator | 2025-09-27 22:18:12.897380 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:18:12.897406 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-27 22:18:12.897424 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-27 22:18:12.897438 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-27 22:18:12.897455 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-27 22:18:12.897470 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-27 22:18:12.897485 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-27 22:18:12.897498 | orchestrator | 2025-09-27 22:18:12.897507 | orchestrator | 2025-09-27 22:18:12.897516 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:18:12.897525 | orchestrator | Saturday 27 September 2025 22:18:10 +0000 (0:00:00.519) 0:03:36.483 **** 2025-09-27 22:18:12.897534 | orchestrator | =============================================================================== 
2025-09-27 22:18:12.897543 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 73.17s
2025-09-27 22:18:12.897551 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.64s
2025-09-27 22:18:12.897560 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.85s
2025-09-27 22:18:12.897569 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.14s
2025-09-27 22:18:12.897582 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.31s
2025-09-27 22:18:12.897596 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 6.81s
2025-09-27 22:18:12.897611 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.94s
2025-09-27 22:18:12.897626 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.87s
2025-09-27 22:18:12.897716 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.09s
2025-09-27 22:18:12.897729 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.50s
2025-09-27 22:18:12.897738 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.35s
2025-09-27 22:18:12.897746 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.12s
2025-09-27 22:18:12.897755 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.07s
2025-09-27 22:18:12.897764 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.98s
2025-09-27 22:18:12.897772 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.95s
2025-09-27 22:18:12.897781 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.78s
2025-09-27 22:18:12.897789 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.68s
2025-09-27 22:18:12.897798 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.35s
2025-09-27 22:18:12.897807 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.35s
2025-09-27 22:18:12.897852 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.30s
2025-09-27 22:18:12.897865 | orchestrator | 2025-09-27 22:18:12 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:12.897877 | orchestrator | 2025-09-27 22:18:12 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:12.897911 | orchestrator | 2025-09-27 22:18:12 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:12.897946 | orchestrator | 2025-09-27 22:18:12 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:15.921387 | orchestrator | 2025-09-27 22:18:15 | INFO  | Task f8df45b2-4aeb-487f-9a19-de3606fdc58b is in state STARTED
2025-09-27 22:18:15.921642 | orchestrator | 2025-09-27 22:18:15 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:15.922250 | orchestrator | 2025-09-27 22:18:15 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:15.923004 | orchestrator | 2025-09-27 22:18:15 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:15.923033 | orchestrator | 2025-09-27 22:18:15 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:18.957483 | orchestrator | 2025-09-27 22:18:18 | INFO  | Task f8df45b2-4aeb-487f-9a19-de3606fdc58b is in state SUCCESS
2025-09-27 22:18:18.959401 | orchestrator |
2025-09-27 22:18:18.959454 | orchestrator |
2025-09-27 22:18:18.959461 | orchestrator | PLAY [Group hosts based on configuration]
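The `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a plain poll-until-terminal loop: query each task's state, sleep, and repeat until every task reports SUCCESS or FAILURE. A generic sketch of that pattern, where the hypothetical `get_state` callable stands in for the OSISM task-state lookup:

```python
import time

TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll task states until every task reaches a terminal state."""
    pending = set(task_ids)
    while pending:
        # sorted() snapshots the set, so discarding while looping is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL:
                pending.discard(task_id)
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

With `interval=1.0` this reproduces the one-second cadence seen in the log; a production loop would also want an overall timeout so a stuck task cannot block the job forever.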
**************************************
2025-09-27 22:18:18.959468 | orchestrator |
2025-09-27 22:18:18.959474 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:18:18.959480 | orchestrator | Saturday 27 September 2025 22:16:29 +0000 (0:00:00.313) 0:00:00.313 ****
2025-09-27 22:18:18.959486 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:18:18.959492 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:18:18.959498 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:18:18.959504 | orchestrator |
2025-09-27 22:18:18.959509 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:18:18.959514 | orchestrator | Saturday 27 September 2025 22:16:30 +0000 (0:00:00.760) 0:00:01.074 ****
2025-09-27 22:18:18.959520 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-09-27 22:18:18.959526 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-09-27 22:18:18.959531 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-09-27 22:18:18.959537 | orchestrator |
2025-09-27 22:18:18.959542 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-09-27 22:18:18.959547 | orchestrator |
2025-09-27 22:18:18.959552 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-27 22:18:18.959558 | orchestrator | Saturday 27 September 2025 22:16:31 +0000 (0:00:00.713) 0:00:01.787 ****
2025-09-27 22:18:18.959575 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:18:18.959587 | orchestrator |
2025-09-27 22:18:18.959593 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-09-27 22:18:18.959598 | orchestrator | Saturday 27 September 2025 22:16:31 +0000 (0:00:00.519) 0:00:02.307 ****
2025-09-27 22:18:18.959604 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-09-27 22:18:18.959609 | orchestrator |
2025-09-27 22:18:18.959614 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-09-27 22:18:18.959619 | orchestrator | Saturday 27 September 2025 22:16:35 +0000 (0:00:03.622) 0:00:05.929 ****
2025-09-27 22:18:18.959624 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-09-27 22:18:18.959630 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-09-27 22:18:18.959635 | orchestrator |
2025-09-27 22:18:18.959640 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-09-27 22:18:18.959645 | orchestrator | Saturday 27 September 2025 22:16:42 +0000 (0:00:06.754) 0:00:12.683 ****
2025-09-27 22:18:18.959650 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-27 22:18:18.959655 | orchestrator |
2025-09-27 22:18:18.959660 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-09-27 22:18:18.959687 | orchestrator | Saturday 27 September 2025 22:16:45 +0000 (0:00:03.461) 0:00:16.145 ****
2025-09-27 22:18:18.959692 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-27 22:18:18.959698 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-09-27 22:18:18.959703 | orchestrator |
2025-09-27 22:18:18.959708 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-09-27 22:18:18.959713 | orchestrator | Saturday 27 September 2025 22:16:49 +0000 (0:00:03.976) 0:00:20.121 ****
2025-09-27 22:18:18.959718 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-27 22:18:18.959724 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-27
22:18:18.959729 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-27 22:18:18.959734 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-27 22:18:18.959739 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-27 22:18:18.959745 | orchestrator | 2025-09-27 22:18:18.959750 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-27 22:18:18.959755 | orchestrator | Saturday 27 September 2025 22:17:05 +0000 (0:00:16.142) 0:00:36.264 **** 2025-09-27 22:18:18.959760 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-27 22:18:18.959766 | orchestrator | 2025-09-27 22:18:18.959771 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-27 22:18:18.959776 | orchestrator | Saturday 27 September 2025 22:17:10 +0000 (0:00:04.659) 0:00:40.923 **** 2025-09-27 22:18:18.959795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.959852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.959860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.959873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.959878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.959908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.959919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.959925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.959931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.959943 | orchestrator | 2025-09-27 22:18:18.959948 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-27 22:18:18.959954 | orchestrator | Saturday 27 September 2025 22:17:12 +0000 (0:00:02.135) 
0:00:43.059 ****
2025-09-27 22:18:18.959959 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-09-27 22:18:18.959964 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-09-27 22:18:18.959969 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-09-27 22:18:18.959975 | orchestrator |
2025-09-27 22:18:18.959981 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-09-27 22:18:18.959986 | orchestrator | Saturday 27 September 2025 22:17:13 +0000 (0:00:01.124) 0:00:44.183 ****
2025-09-27 22:18:18.959992 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:18:18.959998 | orchestrator |
2025-09-27 22:18:18.960004 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-09-27 22:18:18.960009 | orchestrator | Saturday 27 September 2025 22:17:13 +0000 (0:00:00.120) 0:00:44.304 ****
2025-09-27 22:18:18.960015 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:18:18.960021 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:18:18.960027 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:18:18.960032 | orchestrator |
2025-09-27 22:18:18.960038 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-27 22:18:18.960043 | orchestrator | Saturday 27 September 2025 22:17:14 +0000 (0:00:00.682) 0:00:44.986 ****
2025-09-27 22:18:18.960049 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:18:18.960056 | orchestrator |
2025-09-27 22:18:18.960061 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-09-27 22:18:18.960067 | orchestrator | Saturday 27 September 2025 22:17:15 +0000 (0:00:01.167) 0:00:46.155 ****
2025-09-27 22:18:18.960077 | orchestrator | changed: [testbed-node-1] => (item={'key':
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.960089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.960095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.960107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960166 | orchestrator | 2025-09-27 22:18:18.960175 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-27 22:18:18.960182 | orchestrator | Saturday 27 September 2025 22:17:18 +0000 (0:00:03.429) 0:00:49.585 **** 2025-09-27 22:18:18.960191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:18:18.960200 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960221 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:18.960238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:18:18.960253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960270 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:18.960277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:18:18.960291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960310 | orchestrator | skipping: 
[testbed-node-2] 2025-09-27 22:18:18.960322 | orchestrator | 2025-09-27 22:18:18.960327 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-27 22:18:18.960332 | orchestrator | Saturday 27 September 2025 22:17:19 +0000 (0:00:00.812) 0:00:50.397 **** 2025-09-27 22:18:18.960343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:18:18.960353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960358 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960364 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:18.960369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:18:18.960378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960396 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:18.960406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:18:18.960412 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960423 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:18:18.960439 | orchestrator | 2025-09-27 22:18:18.960444 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-27 22:18:18.960450 | orchestrator | Saturday 27 September 2025 22:17:20 +0000 (0:00:00.720) 0:00:51.118 **** 2025-09-27 22:18:18.960455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.960467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.960479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.960488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960506 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-27 22:18:18.960561 | orchestrator |
2025-09-27 22:18:18.960569 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-09-27 22:18:18.960579 | orchestrator | Saturday 27 September 2025 22:17:23 +0000 (0:00:03.187) 0:00:54.305 ****
2025-09-27 22:18:18.960588 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:18:18.960596 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:18:18.960604 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:18:18.960609 | orchestrator |
2025-09-27 22:18:18.960614 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-09-27 22:18:18.960619 | orchestrator | Saturday 27 September 2025 22:17:25 +0000 (0:00:01.869) 0:00:56.175 ****
2025-09-27 22:18:18.960624 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-27 22:18:18.960629 | orchestrator |
2025-09-27 22:18:18.960634 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-09-27 22:18:18.960639 | orchestrator | Saturday 27 September 2025 22:17:26 +0000 (0:00:00.832) 0:00:57.007 ****
2025-09-27 22:18:18.960644 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:18:18.960649 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:18:18.960654 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:18:18.960659 | orchestrator |
2025-09-27 22:18:18.960664 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-27
22:18:18.960669 | orchestrator | Saturday 27 September 2025 22:17:26 +0000 (0:00:00.545) 0:00:57.553 **** 2025-09-27 22:18:18.960675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.960683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.960697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 22:18:18.960703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:18:18.960742 | orchestrator | 2025-09-27 22:18:18.960747 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-27 22:18:18.960752 | orchestrator | Saturday 27 September 2025 22:17:33 +0000 (0:00:06.186) 0:01:03.740 **** 2025-09-27 22:18:18.960761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:18:18.960766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960781 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:18:18.960787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:18:18.960795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960829 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:18:18.960836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 22:18:18.960841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 22:18:18.960847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-27 22:18:18.960855 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:18:18.960860 | orchestrator |
2025-09-27 22:18:18.960865 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-09-27 22:18:18.960870 | orchestrator | Saturday 27 September 2025 22:17:33 +0000 (0:00:00.507) 0:01:04.248 ****
2025-09-27 22:18:18.960881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-27 22:18:18.960892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-27 22:18:18.960898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-27 22:18:18.960903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-27 22:18:18.960912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-27 22:18:18.960920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-27 22:18:18.960925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-27 22:18:18.960936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-27 22:18:18.960941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-27 22:18:18.960946 | orchestrator |
2025-09-27 22:18:18.960952 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-27 22:18:18.960957 | orchestrator | Saturday 27 September 2025 22:17:36 +0000 (0:00:02.809) 0:01:07.057 ****
2025-09-27 22:18:18.960962 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:18:18.960967 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:18:18.960972 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:18:18.960977 | orchestrator |
2025-09-27 22:18:18.960982 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-09-27 22:18:18.960991 | orchestrator | Saturday 27 September 2025 22:17:36 +0000 (0:00:00.274) 0:01:07.332 ****
2025-09-27 22:18:18.960996 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:18:18.961003 | orchestrator |
2025-09-27 22:18:18.961011 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-09-27 22:18:18.961019 | orchestrator | Saturday 27 September 2025 22:17:38 +0000 (0:00:02.101) 0:01:09.434 ****
2025-09-27 22:18:18.961026 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:18:18.961034 | orchestrator |
2025-09-27 22:18:18.961041 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-09-27 22:18:18.961049 | orchestrator | Saturday 27 September 2025 22:17:41 +0000 (0:00:02.482) 0:01:11.916 ****
2025-09-27 22:18:18.961067 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:18:18.961076 | orchestrator |
2025-09-27 22:18:18.961085 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-27 22:18:18.961093 | orchestrator | Saturday 27 September 2025 22:17:56 +0000 (0:00:14.731) 0:01:26.647 ****
2025-09-27 22:18:18.961101 | orchestrator |
2025-09-27 22:18:18.961109 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-27 22:18:18.961118 | orchestrator | Saturday 27 September 2025 22:17:56 +0000 (0:00:00.059) 0:01:26.707 ****
2025-09-27 22:18:18.961126 | orchestrator |
2025-09-27 22:18:18.961134 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-27 22:18:18.961143 | orchestrator | Saturday 27 September 2025 22:17:56 +0000 (0:00:00.060) 0:01:26.768 ****
2025-09-27 22:18:18.961149 | orchestrator |
2025-09-27 22:18:18.961154 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-09-27 22:18:18.961159 | orchestrator | Saturday 27 September 2025 22:17:56 +0000 (0:00:00.061) 0:01:26.830 ****
2025-09-27 22:18:18.961164 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:18:18.961169 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:18:18.961174 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:18:18.961179 | orchestrator |
2025-09-27 22:18:18.961185 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-09-27 22:18:18.961193 | orchestrator | Saturday 27 September 2025 22:18:07 +0000 (0:00:11.161) 0:01:37.991 ****
2025-09-27 22:18:18.961201 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:18:18.961209 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:18:18.961218 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:18:18.961227 | orchestrator |
2025-09-27 22:18:18.961236 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-09-27 22:18:18.961244 | orchestrator | Saturday 27 September 2025 22:18:11 +0000 (0:00:04.531) 0:01:42.523 ****
2025-09-27 22:18:18.961251 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:18:18.961260 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:18:18.961269 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:18:18.961277 | orchestrator |
2025-09-27 22:18:18.961286 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:18:18.961298 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-27 22:18:18.961304 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 22:18:18.961310 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 22:18:18.961315 | orchestrator |
2025-09-27 22:18:18.961320 | orchestrator |
2025-09-27 22:18:18.961325 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:18:18.961330 | orchestrator | Saturday 27 September 2025 22:18:18 +0000 (0:00:06.226) 0:01:48.750 ****
2025-09-27 22:18:18.961335 | orchestrator | ===============================================================================
2025-09-27 22:18:18.961340 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.14s
2025-09-27 22:18:18.961355 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 14.73s
2025-09-27 22:18:18.961361 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.16s
2025-09-27 22:18:18.961366 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.75s
2025-09-27 22:18:18.961371 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.23s
2025-09-27 22:18:18.961376 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.19s
2025-09-27 22:18:18.961381 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.66s
2025-09-27 22:18:18.961386 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.53s
2025-09-27 22:18:18.961391 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.98s
2025-09-27 22:18:18.961397 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.62s
2025-09-27 22:18:18.961402 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.46s
2025-09-27 22:18:18.961407 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.43s
2025-09-27 22:18:18.961412 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.19s
2025-09-27 22:18:18.961417 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.81s
2025-09-27 22:18:18.961422 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.48s
2025-09-27 22:18:18.961427 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.13s
2025-09-27 22:18:18.961432 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.10s
2025-09-27 22:18:18.961437 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.87s
2025-09-27 22:18:18.961442 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.17s
2025-09-27 22:18:18.961447 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.12s
2025-09-27 22:18:18.961452 | orchestrator | 2025-09-27 22:18:18 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:18.963650 | orchestrator | 2025-09-27 22:18:18 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:18.963742 | orchestrator | 2025-09-27 22:18:18 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:18.963756 | orchestrator | 2025-09-27 22:18:18 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:22.009670 | orchestrator | 2025-09-27 22:18:22 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:22.014554 | orchestrator | 2025-09-27 22:18:22 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:22.017364 | orchestrator | 2025-09-27 22:18:22 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:22.019574 | orchestrator | 2025-09-27 22:18:22 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:22.019629 | orchestrator | 2025-09-27 22:18:22 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:25.056335 | orchestrator | 2025-09-27 22:18:25 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:25.058403 | orchestrator | 2025-09-27 22:18:25 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:25.059554 | orchestrator | 2025-09-27 22:18:25 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:25.060879 | orchestrator | 2025-09-27 22:18:25 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:25.061059 | orchestrator | 2025-09-27 22:18:25 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:28.104869 | orchestrator | 2025-09-27 22:18:28 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:28.105338 | orchestrator | 2025-09-27 22:18:28 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:28.106348 | orchestrator | 2025-09-27 22:18:28 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:28.107084 | orchestrator | 2025-09-27 22:18:28 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:28.108526 | orchestrator | 2025-09-27 22:18:28 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:31.149626 | orchestrator | 2025-09-27 22:18:31 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:31.152610 | orchestrator | 2025-09-27 22:18:31 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:31.154274 | orchestrator | 2025-09-27 22:18:31 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:31.155899 | orchestrator | 2025-09-27 22:18:31 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:31.155931 | orchestrator | 2025-09-27 22:18:31 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:34.201351 | orchestrator | 2025-09-27 22:18:34 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:34.202152 | orchestrator | 2025-09-27 22:18:34 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:34.203353 | orchestrator | 2025-09-27 22:18:34 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:34.204787 | orchestrator | 2025-09-27 22:18:34 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:34.204927 | orchestrator | 2025-09-27 22:18:34 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:37.253045 | orchestrator | 2025-09-27 22:18:37 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:37.254590 | orchestrator | 2025-09-27 22:18:37 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:37.255928 | orchestrator | 2025-09-27 22:18:37 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:37.257817 | orchestrator | 2025-09-27 22:18:37 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:37.258179 | orchestrator | 2025-09-27 22:18:37 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:40.306773 | orchestrator | 2025-09-27 22:18:40 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:40.308504 | orchestrator | 2025-09-27 22:18:40 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:40.309645 | orchestrator | 2025-09-27 22:18:40 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:40.312412 | orchestrator | 2025-09-27 22:18:40 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:40.312461 | orchestrator | 2025-09-27 22:18:40 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:43.343909 | orchestrator | 2025-09-27 22:18:43 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:43.344252 | orchestrator | 2025-09-27 22:18:43 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:43.345065 | orchestrator | 2025-09-27 22:18:43 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:43.345776 | orchestrator | 2025-09-27 22:18:43 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:43.345857 | orchestrator | 2025-09-27 22:18:43 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:46.366864 | orchestrator | 2025-09-27 22:18:46 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:46.367505 | orchestrator | 2025-09-27 22:18:46 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:46.367665 | orchestrator | 2025-09-27 22:18:46 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:46.368365 | orchestrator | 2025-09-27 22:18:46 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:46.368404 | orchestrator | 2025-09-27 22:18:46 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:49.406908 | orchestrator | 2025-09-27 22:18:49 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:49.408747 | orchestrator | 2025-09-27 22:18:49 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:49.410249 | orchestrator | 2025-09-27 22:18:49 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:49.411764 | orchestrator | 2025-09-27 22:18:49 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:49.411813 | orchestrator | 2025-09-27 22:18:49 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:52.448057 | orchestrator | 2025-09-27 22:18:52 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:52.450968 | orchestrator | 2025-09-27 22:18:52 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:52.454188 | orchestrator | 2025-09-27 22:18:52 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:52.456891 | orchestrator | 2025-09-27 22:18:52 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:52.457146 | orchestrator | 2025-09-27 22:18:52 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:55.500496 | orchestrator | 2025-09-27 22:18:55 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:55.501335 | orchestrator | 2025-09-27 22:18:55 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:55.502367 | orchestrator | 2025-09-27 22:18:55 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:55.503919 | orchestrator | 2025-09-27 22:18:55 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:55.503964 | orchestrator | 2025-09-27 22:18:55 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:18:58.545163 | orchestrator | 2025-09-27 22:18:58 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:18:58.546588 | orchestrator | 2025-09-27 22:18:58 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:18:58.548583 | orchestrator | 2025-09-27 22:18:58 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:18:58.551965 | orchestrator | 2025-09-27 22:18:58 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:18:58.552130 | orchestrator | 2025-09-27 22:18:58 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:19:01.590355 | orchestrator | 2025-09-27 22:19:01 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:19:01.591094 | orchestrator | 2025-09-27 22:19:01 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:19:01.593001 | orchestrator | 2025-09-27 22:19:01 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state STARTED
2025-09-27 22:19:01.595961 | orchestrator | 2025-09-27 22:19:01 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED
2025-09-27 22:19:01.596121 | orchestrator | 2025-09-27 22:19:01 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:19:04.631102 | orchestrator | 2025-09-27 22:19:04 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED
2025-09-27 22:19:04.632512 | orchestrator | 2025-09-27 22:19:04 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:19:04.636610 | orchestrator | 2025-09-27 22:19:04 | INFO  | Task 5b2cbff3-af4b-45ec-8106-b2bce72adaf4 is in state SUCCESS
2025-09-27 22:19:04.640533 | orchestrator |
2025-09-27 22:19:04.640604 | orchestrator |
2025-09-27 22:19:04.640618 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:19:04.640628 | orchestrator |
2025-09-27 22:19:04.640638 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:19:04.640647 | orchestrator | Saturday 27 September 2025 22:15:18 +0000 (0:00:00.468) 0:00:00.468 ****
2025-09-27 22:19:04.640656 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:19:04.640668 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:19:04.640677 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:19:04.640687 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:19:04.640696 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:19:04.640706 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:19:04.640715 | orchestrator |
2025-09-27 22:19:04.640723 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:19:04.640731 | orchestrator | Saturday 27 September 2025 22:15:18 +0000 (0:00:00.619) 0:00:01.087 ****
2025-09-27 22:19:04.640739 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-27 22:19:04.640748 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-27 22:19:04.640757 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-27 22:19:04.640765 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-27 22:19:04.640774 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-27 22:19:04.640828 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-27 22:19:04.640992 | orchestrator |
2025-09-27 22:19:04.641008 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-27 22:19:04.641018 | orchestrator |
2025-09-27 22:19:04.641044 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-27 22:19:04.641054 | orchestrator | Saturday 27 September 2025 22:15:19 +0000 (0:00:00.568) 0:00:01.656 ****
2025-09-27 22:19:04.641066 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:19:04.641077 | orchestrator |
2025-09-27 22:19:04.641087 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-27 22:19:04.641096 | orchestrator | Saturday 27 September 2025 22:15:20 +0000 (0:00:01.057) 0:00:02.714 ****
2025-09-27 22:19:04.641106 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:19:04.641116 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:19:04.641125 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:19:04.641134 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:19:04.641144 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:19:04.641153 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:19:04.641162 | orchestrator |
2025-09-27 22:19:04.641172 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-27 22:19:04.641181 | orchestrator | Saturday 27 September 2025 22:15:21 +0000 (0:00:01.157) 0:00:03.871 ****
2025-09-27 22:19:04.641192 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:19:04.641201 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:19:04.641232 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:19:04.641242 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:19:04.641251 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:19:04.641260 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:19:04.641270 | orchestrator |
2025-09-27 22:19:04.641279 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-27 22:19:04.641289 | orchestrator | Saturday 27 September 2025 22:15:22 +0000 (0:00:01.112) 0:00:04.983 ****
2025-09-27 22:19:04.641298 | orchestrator | ok: [testbed-node-0] => {
2025-09-27 22:19:04.641309 | orchestrator |  "changed": false,
2025-09-27 22:19:04.641317 | orchestrator |  "msg": "All assertions passed"
2025-09-27 22:19:04.641327 | orchestrator | }
2025-09-27 22:19:04.641348 | orchestrator | ok: [testbed-node-1] => {
2025-09-27 22:19:04.641359 | orchestrator |  "changed": false,
2025-09-27 22:19:04.641368 | orchestrator |  "msg": "All assertions passed"
2025-09-27 22:19:04.641378 | orchestrator | }
2025-09-27 22:19:04.641387 | orchestrator | ok: [testbed-node-2] => {
2025-09-27 22:19:04.641396 | orchestrator |  "changed": false,
2025-09-27 22:19:04.641405 | orchestrator |  "msg": "All assertions passed"
2025-09-27 22:19:04.641413 | orchestrator | }
2025-09-27 22:19:04.641421 | orchestrator | ok: [testbed-node-3] => {
2025-09-27 22:19:04.641429 | orchestrator |  "changed": false,
2025-09-27 22:19:04.641438 | orchestrator |  "msg": "All assertions passed"
2025-09-27 22:19:04.641447 | orchestrator | }
2025-09-27 22:19:04.641456 | orchestrator | ok: [testbed-node-4] => {
2025-09-27 22:19:04.641465 | orchestrator |  "changed": false,
2025-09-27 22:19:04.641474 | orchestrator |  "msg": "All assertions passed"
2025-09-27 22:19:04.641484 | orchestrator | }
2025-09-27 22:19:04.641493 | orchestrator | ok: [testbed-node-5] => {
2025-09-27 22:19:04.641502 | orchestrator |  "changed": false,
2025-09-27 22:19:04.641511 | orchestrator |  "msg": "All assertions passed"
2025-09-27 22:19:04.641520 | orchestrator | }
2025-09-27 22:19:04.641530 | orchestrator |
2025-09-27 22:19:04.641539 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-27 22:19:04.641549 | orchestrator | Saturday 27 September 2025 22:15:23 +0000 (0:00:00.981) 0:00:05.965 ****
2025-09-27 22:19:04.641558 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.641567 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.641576 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.641585 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.641594 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.641604 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.641613 | orchestrator |
2025-09-27 22:19:04.641622 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-27 22:19:04.641631 | orchestrator | Saturday 27 September 2025 22:15:24 +0000 (0:00:00.569) 0:00:06.535 ****
2025-09-27 22:19:04.641641 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-09-27 22:19:04.641650 | orchestrator |
2025-09-27 22:19:04.641659 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-27 22:19:04.641669 | orchestrator | Saturday 27 September 2025 22:15:27 +0000 (0:00:03.737) 0:00:10.273 ****
2025-09-27 22:19:04.641678 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-27 22:19:04.641689 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-27 22:19:04.641698 | orchestrator |
2025-09-27 22:19:04.641725 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-27 22:19:04.641735 | orchestrator | Saturday 27 September 2025 22:15:34 +0000 (0:00:06.400) 0:00:16.673 ****
2025-09-27 22:19:04.641745 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-27 22:19:04.641754 | orchestrator |
2025-09-27 22:19:04.641763 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-27 22:19:04.641772 | orchestrator | Saturday 27 September 2025 22:15:37 +0000 (0:00:02.994) 0:00:19.668 ****
2025-09-27 22:19:04.641812 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-27 22:19:04.641822 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-27 22:19:04.641830 | orchestrator |
2025-09-27 22:19:04.641838 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-27 22:19:04.641846 | orchestrator | Saturday 27 September 2025 22:15:40 +0000 (0:00:03.541) 0:00:23.209 ****
2025-09-27 22:19:04.641854 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-27 22:19:04.641864 | orchestrator |
2025-09-27 22:19:04.641873 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-27 22:19:04.641882 | orchestrator | Saturday 27 September 2025 22:15:43 +0000 (0:00:03.015) 0:00:26.225 ****
2025-09-27 22:19:04.641891 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-27 22:19:04.641899 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-27 22:19:04.641909 | orchestrator |
2025-09-27 22:19:04.641918 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-27 22:19:04.641927 | orchestrator | Saturday 27 September 2025 22:15:51 +0000 (0:00:07.326) 0:00:33.552 ****
2025-09-27 22:19:04.641942 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.641952 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.641962 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.641970 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.641978 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.641987 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.641996 | orchestrator |
2025-09-27 22:19:04.642004 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-27 22:19:04.642013 | orchestrator | Saturday 27 September 2025 22:15:51 +0000 (0:00:00.624) 0:00:34.176 ****
2025-09-27 22:19:04.642070 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.642079 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.642088 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.642097 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.642106 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.642116 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.642125 | orchestrator |
2025-09-27 22:19:04.642135 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-27 22:19:04.642144 | orchestrator | Saturday 27 September 2025 22:15:53 +0000 (0:00:02.000) 0:00:36.177 ****
2025-09-27 22:19:04.642154 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:19:04.642164 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:19:04.642173 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:19:04.642183 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:19:04.642192 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:19:04.642202 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:19:04.642211 | orchestrator |
2025-09-27 22:19:04.642221 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-27 22:19:04.642231 | orchestrator | Saturday 27 September 2025 22:15:55 +0000 (0:00:01.531) 0:00:37.709 ****
2025-09-27 22:19:04.642240 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.642250 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.642259 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.642269 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.642279 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.642288 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.642298 | orchestrator |
2025-09-27 22:19:04.642307 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-09-27 22:19:04.642317 | orchestrator | Saturday 27 September 2025 22:15:58 +0000 (0:00:02.882) 0:00:40.591 ****
2025-09-27 22:19:04.642330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.642365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.642381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.642392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.642402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.642412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 22:19:04.642426 | orchestrator | 2025-09-27 22:19:04.642435 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-27 22:19:04.642444 | orchestrator | Saturday 27 September 2025 22:16:00 +0000 (0:00:02.179) 0:00:42.771 **** 2025-09-27 22:19:04.642453 | orchestrator | [WARNING]: Skipped 2025-09-27 22:19:04.642462 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-27 22:19:04.642472 | orchestrator | due to this access issue: 2025-09-27 22:19:04.642481 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-27 22:19:04.642490 | orchestrator | a directory 2025-09-27 22:19:04.642498 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 22:19:04.642507 | orchestrator | 2025-09-27 22:19:04.642516 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-27 22:19:04.642539 | orchestrator | Saturday 27 September 2025 22:16:01 +0000 (0:00:00.760) 0:00:43.531 **** 2025-09-27 22:19:04.642549 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:19:04.642559 | orchestrator | 2025-09-27 22:19:04.642568 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-27 22:19:04.642577 | orchestrator | Saturday 27 September 2025 22:16:02 +0000 (0:00:01.022) 0:00:44.553 **** 2025-09-27 22:19:04.642590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 22:19:04.642601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 22:19:04.642611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 22:19:04.642627 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 22:19:04.642644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 22:19:04.642657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 22:19:04.642666 | orchestrator | 2025-09-27 22:19:04.642676 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-27 22:19:04.642685 | orchestrator | Saturday 27 September 2025 22:16:04 +0000 (0:00:02.637) 0:00:47.191 **** 2025-09-27 22:19:04.642718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.642735 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.642745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.642754 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:19:04.642763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.642807 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:19:04.642818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.642827 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:19:04.642839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.642847 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:19:04.642856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.642880 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:19:04.642890 | orchestrator | 2025-09-27 22:19:04.642899 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-27 22:19:04.642908 | orchestrator | Saturday 27 September 2025 22:16:06 +0000 (0:00:02.089) 0:00:49.280 **** 2025-09-27 22:19:04.642917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 
22:19:04.642926 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:19:04.642943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.642952 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.642966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 
22:19:04.642975 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:19:04.642985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.643000 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:19:04.643010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.643018 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:19:04.643028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.643037 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:19:04.643045 | orchestrator | 2025-09-27 22:19:04.643054 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-27 22:19:04.643063 | orchestrator | Saturday 27 September 2025 22:16:09 +0000 (0:00:02.843) 0:00:52.124 **** 2025-09-27 22:19:04.643072 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:19:04.643081 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.643090 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:19:04.643099 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:19:04.643108 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:19:04.643116 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:19:04.643125 | orchestrator | 2025-09-27 22:19:04.643134 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-27 22:19:04.643148 | orchestrator | Saturday 27 September 2025 22:16:11 +0000 (0:00:01.909) 0:00:54.034 **** 2025-09-27 22:19:04.643158 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.643167 | orchestrator | 2025-09-27 22:19:04.643176 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-27 22:19:04.643186 | orchestrator | Saturday 27 September 2025 22:16:11 +0000 
(0:00:00.126) 0:00:54.160 **** 2025-09-27 22:19:04.643195 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.643204 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:19:04.643213 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:19:04.643221 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:19:04.643229 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:19:04.643238 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:19:04.643247 | orchestrator | 2025-09-27 22:19:04.643256 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-27 22:19:04.643265 | orchestrator | Saturday 27 September 2025 22:16:12 +0000 (0:00:00.546) 0:00:54.707 **** 2025-09-27 22:19:04.643286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.643297 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.643306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.643315 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:19:04.643324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.643334 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:19:04.643349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:19:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:19:04.643386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.643399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.643409 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:19:04.643418 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:19:04.643427 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:19:04.643436 | orchestrator | 2025-09-27 22:19:04.643445 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-27 22:19:04.643454 | orchestrator | Saturday 27 September 2025 22:16:15 +0000 (0:00:02.928) 0:00:57.636 **** 2025-09-27 22:19:04.643464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 22:19:04.643473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.643489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.643508 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643535 | orchestrator |
2025-09-27 22:19:04.643544 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-09-27 22:19:04.643553 | orchestrator | Saturday 27 September 2025 22:16:18 +0000 (0:00:03.597) 0:01:01.233 ****
2025-09-27 22:19:04.643563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.643582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.643595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.643623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643639 | orchestrator |
2025-09-27 22:19:04.643648 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-09-27 22:19:04.643656 | orchestrator | Saturday 27 September 2025 22:16:23 +0000 (0:00:04.376) 0:01:05.610 ****
2025-09-27 22:19:04.643670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.643684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.643693 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.643701 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.643710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643719 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.643727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643736 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.643749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.643765 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.643773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643829 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.643838 | orchestrator |
2025-09-27 22:19:04.643845 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-27 22:19:04.643854 | orchestrator | Saturday 27 September 2025 22:16:25 +0000 (0:00:02.098) 0:01:07.709 ****
2025-09-27 22:19:04.643869 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.643878 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.643887 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:19:04.643895 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.643903 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:19:04.643911 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:19:04.643918 | orchestrator |
2025-09-27 22:19:04.643926 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-27 22:19:04.643935 | orchestrator | Saturday 27 September 2025 22:16:28 +0000 (0:00:02.848) 0:01:10.557 ****
2025-09-27 22:19:04.643944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643953 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.643962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.643983 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.643992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.644001 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.644018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.644032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.644041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-27 22:19:04.644050 | orchestrator |
2025-09-27 22:19:04.644059 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-27 22:19:04.644068 | orchestrator | Saturday 27 September 2025 22:16:32 +0000 (0:00:04.085) 0:01:14.643 ****
2025-09-27 22:19:04.644083 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.644092 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.644101 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.644110 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.644118 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.644127 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.644136 | orchestrator |
2025-09-27 22:19:04.644144 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-27 22:19:04.644153 | orchestrator | Saturday 27 September 2025 22:16:34 +0000 (0:00:01.826) 0:01:16.469 ****
2025-09-27 22:19:04.644162 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.644170 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.644179 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.644188 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.644197 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.644205 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.644214 | orchestrator |
2025-09-27 22:19:04.644223 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-27 22:19:04.644231 | orchestrator | Saturday 27 September 2025 22:16:35 +0000 (0:00:01.854) 0:01:18.324 ****
2025-09-27 22:19:04.644240 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.644249 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.644258 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.644266 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.644275 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.644284 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.644292 | orchestrator |
2025-09-27 22:19:04.644301 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-27 22:19:04.644310 | orchestrator | Saturday 27 September 2025 22:16:37 +0000 (0:00:02.010) 0:01:20.334 ****
2025-09-27 22:19:04.644318 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.644327 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.644336 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.644345 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.644353 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.644361 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.644370 | orchestrator |
2025-09-27 22:19:04.644379 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-27 22:19:04.644394 | orchestrator | Saturday 27 September 2025 22:16:39 +0000 (0:00:01.890) 0:01:22.224 ****
2025-09-27 22:19:04.644404 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.644413 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.644422 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.644430 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.644439 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.644447 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.644455 | orchestrator |
2025-09-27 22:19:04.644465 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-27 22:19:04.644473 | orchestrator | Saturday 27 September 2025 22:16:42 +0000 (0:00:02.560) 0:01:24.785 ****
2025-09-27 22:19:04.644482 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.644491 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.644500 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.644508 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.644517 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.644525 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.644534 | orchestrator |
2025-09-27 22:19:04.644543 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-27 22:19:04.644552 | orchestrator | Saturday 27 September 2025 22:16:44 +0000 (0:00:01.908) 0:01:26.694 ****
2025-09-27 22:19:04.644561 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-27 22:19:04.644570 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.644586 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-27 22:19:04.644595 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.644608 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-27 22:19:04.644617 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.644626 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-27 22:19:04.644634 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.644643 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-27 22:19:04.644651 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.644661 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-27 22:19:04.644670 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.644679 | orchestrator |
2025-09-27 22:19:04.644688 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-09-27 22:19:04.644696 | orchestrator | Saturday 27 September 2025 22:16:46 +0000 (0:00:02.054) 0:01:28.748 ****
2025-09-27 22:19:04.644705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.644714 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.644723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.644732 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:19:04.644748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.644763 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:19:04.644822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.644835 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:19:04.644843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.644851 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:19:04.644860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.644869 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:19:04.644877 | orchestrator | 2025-09-27 22:19:04.644885 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-27 22:19:04.644895 | orchestrator | Saturday 27 September 2025 22:16:49 +0000 (0:00:02.797) 0:01:31.546 **** 2025-09-27 22:19:04.644904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.644913 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.644929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.644946 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:19:04.644958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.644968 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:19:04.644977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.644986 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:19:04.644994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.645003 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.645012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.645028 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.645037 | orchestrator |
2025-09-27 22:19:04.645051 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-09-27 22:19:04.645060 | orchestrator | Saturday 27 September 2025 22:16:51 +0000 (0:00:02.308) 0:01:33.854 ****
2025-09-27 22:19:04.645068 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.645076 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.645083 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.645091 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.645098 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.645106 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.645115 | orchestrator |
2025-09-27 22:19:04.645123 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-09-27 22:19:04.645131 | orchestrator | Saturday 27 September 2025 22:16:54 +0000 (0:00:02.725) 0:01:36.580 ****
2025-09-27 22:19:04.645140 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.645148 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.645157 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.645165 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:19:04.645173 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:19:04.645182 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:19:04.645190 | orchestrator |
2025-09-27 22:19:04.645198 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-09-27 22:19:04.645206 | orchestrator | Saturday 27 September 2025 22:16:58 +0000 (0:00:03.863) 0:01:40.444 ****
2025-09-27 22:19:04.645215 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.645223 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.645232 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.645240 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.645248 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.645261 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.645270 | orchestrator |
2025-09-27 22:19:04.645278 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-09-27 22:19:04.645287 | orchestrator | Saturday 27 September 2025 22:16:59 +0000 (0:00:01.845) 0:01:42.290 ****
2025-09-27 22:19:04.645296 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.645304 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.645313 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.645322 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.645330 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.645338 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.645346 | orchestrator |
2025-09-27 22:19:04.645355 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-09-27 22:19:04.645363 | orchestrator | Saturday 27 September 2025 22:17:01 +0000 (0:00:01.940) 0:01:44.230 ****
2025-09-27 22:19:04.645371 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.645379 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.645387 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.645395 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.645404 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.645412 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.645420 | orchestrator |
2025-09-27 22:19:04.645429 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-09-27 22:19:04.645437 | orchestrator | Saturday 27 September 2025 22:17:03 +0000 (0:00:01.704) 0:01:45.935 ****
2025-09-27 22:19:04.645445 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.645453 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.645462 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.645470 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.645479 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.645488 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.645503 | orchestrator |
2025-09-27 22:19:04.645511 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-09-27 22:19:04.645519 | orchestrator | Saturday 27 September 2025 22:17:05 +0000 (0:00:01.857) 0:01:47.793 ****
2025-09-27 22:19:04.645528 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.645537 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.645546 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.645554 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.645562 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.645571 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.645579 | orchestrator |
2025-09-27 22:19:04.645588 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-09-27 22:19:04.645596 | orchestrator | Saturday 27 September 2025 22:17:07 +0000 (0:00:01.830) 0:01:49.623 ****
2025-09-27 22:19:04.645604 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.645612 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.645621 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.645630 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.645639 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.645647 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.645656 | orchestrator |
2025-09-27 22:19:04.645665 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-09-27 22:19:04.645673 | orchestrator | Saturday 27 September 2025 22:17:09 +0000 (0:00:02.117) 0:01:51.741 ****
2025-09-27 22:19:04.645682 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.645690 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.645699 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.645708 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.645716 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.645724 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.645733 | orchestrator |
2025-09-27 22:19:04.645741 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-09-27 22:19:04.645749 | orchestrator | Saturday 27 September 2025 22:17:11 +0000 (0:00:02.123) 0:01:53.864 ****
2025-09-27 22:19:04.645757 | orchestrator | skipping: [testbed-node-1] =>
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 22:19:04.645766 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:19:04.645796 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 22:19:04.645806 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.645815 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 22:19:04.645824 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:19:04.645839 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 22:19:04.645848 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:19:04.645856 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 22:19:04.645865 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:19:04.645874 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 22:19:04.645883 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:19:04.645892 | orchestrator | 2025-09-27 22:19:04.645901 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-27 22:19:04.645909 | orchestrator | Saturday 27 September 2025 22:17:13 +0000 (0:00:01.983) 0:01:55.848 **** 2025-09-27 22:19:04.645923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.645939 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:19:04.645948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.645957 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:19:04.645965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.645974 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:19:04.646087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 22:19:04.646105 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:19:04.646113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.646127 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:19:04.646146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 22:19:04.646156 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:19:04.646165 | orchestrator | 2025-09-27 22:19:04.646174 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-27 22:19:04.646183 | orchestrator | Saturday 27 September 2025 22:17:15 +0000 (0:00:02.474) 0:01:58.322 **** 2025-09-27 22:19:04.646192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 22:19:04.646201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 22:19:04.646219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 
22:19:04.646231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 22:19:04.646251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 22:19:04.646261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-27 22:19:04.646270 | orchestrator |
2025-09-27 22:19:04.646280 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-27 22:19:04.646289 | orchestrator | Saturday 27 September 2025 22:17:18 +0000 (0:00:02.570) 0:02:00.893 ****
2025-09-27 22:19:04.646297 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:19:04.646306 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:19:04.646315 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:19:04.646324 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:19:04.646332 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:19:04.646342 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:19:04.646350 | orchestrator |
2025-09-27 22:19:04.646358 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-09-27 22:19:04.646367 | orchestrator | Saturday 27 September 2025 22:17:19 +0000 (0:00:00.540) 0:02:01.433 ****
2025-09-27 22:19:04.646376 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:19:04.646384 | orchestrator |
2025-09-27 22:19:04.646393 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-09-27 22:19:04.646401 | orchestrator | Saturday 27 September 2025 22:17:21 +0000 (0:00:02.214) 0:02:03.647 ****
2025-09-27 22:19:04.646410 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:19:04.646418 | orchestrator |
2025-09-27 22:19:04.646427 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-09-27 22:19:04.646435 | orchestrator | Saturday 27 September 2025 22:17:23 +0000 (0:00:02.601) 0:02:06.249 ****
2025-09-27 22:19:04.646443 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:19:04.646451 | orchestrator |
2025-09-27 22:19:04.646460 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-27 22:19:04.646468 | orchestrator | Saturday 27 September 2025 22:18:10 +0000 (0:00:46.713) 0:02:52.963 ****
2025-09-27 22:19:04.646477 | orchestrator |
2025-09-27 22:19:04.646486 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-27 22:19:04.646501 | orchestrator | Saturday 27 September 2025 22:18:10 +0000 (0:00:00.063) 0:02:53.026 ****
2025-09-27 22:19:04.646510 | orchestrator |
2025-09-27 22:19:04.646524 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-27 22:19:04.646532 | orchestrator | Saturday 27 September 2025 22:18:10 +0000 (0:00:00.168) 0:02:53.195 ****
2025-09-27 22:19:04.646540 | orchestrator |
2025-09-27 22:19:04.646549 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-27 22:19:04.646556 | orchestrator | Saturday 27 September 2025 22:18:10 +0000 (0:00:00.061) 0:02:53.256 ****
2025-09-27 22:19:04.646564 | orchestrator |
2025-09-27 22:19:04.646573 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-27 22:19:04.646581 | orchestrator | Saturday 27 September 2025 22:18:10 +0000 (0:00:00.060) 0:02:53.317 ****
2025-09-27 22:19:04.646589 | orchestrator |
2025-09-27 22:19:04.646598 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-27 22:19:04.646607 | orchestrator | Saturday 27 September 2025 22:18:11 +0000 (0:00:00.058) 0:02:53.375 ****
2025-09-27 22:19:04.646615 |
orchestrator | 2025-09-27 22:19:04.646624 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-27 22:19:04.646631 | orchestrator | Saturday 27 September 2025 22:18:11 +0000 (0:00:00.059) 0:02:53.435 **** 2025-09-27 22:19:04.646639 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:19:04.646647 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:19:04.646655 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:19:04.646664 | orchestrator | 2025-09-27 22:19:04.646672 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-27 22:19:04.646680 | orchestrator | Saturday 27 September 2025 22:18:41 +0000 (0:00:30.324) 0:03:23.760 **** 2025-09-27 22:19:04.646689 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:19:04.646698 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:19:04.646706 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:19:04.646714 | orchestrator | 2025-09-27 22:19:04.646723 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:19:04.646737 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 22:19:04.646747 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-27 22:19:04.646757 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-27 22:19:04.646766 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 22:19:04.646775 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 22:19:04.646808 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 22:19:04.646818 | orchestrator | 2025-09-27 
22:19:04.646826 | orchestrator | 2025-09-27 22:19:04.646833 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:19:04.646841 | orchestrator | Saturday 27 September 2025 22:19:03 +0000 (0:00:22.040) 0:03:45.800 **** 2025-09-27 22:19:04.646849 | orchestrator | =============================================================================== 2025-09-27 22:19:04.646857 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.71s 2025-09-27 22:19:04.646866 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.32s 2025-09-27 22:19:04.646875 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 22.04s 2025-09-27 22:19:04.646891 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.33s 2025-09-27 22:19:04.646900 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.40s 2025-09-27 22:19:04.646908 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 4.38s 2025-09-27 22:19:04.646916 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.09s 2025-09-27 22:19:04.646925 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.86s 2025-09-27 22:19:04.646933 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.74s 2025-09-27 22:19:04.646942 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.60s 2025-09-27 22:19:04.646950 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.54s 2025-09-27 22:19:04.646958 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.02s 2025-09-27 22:19:04.646966 | orchestrator | service-ks-register : neutron | Creating projects 
----------------------- 2.99s 2025-09-27 22:19:04.646975 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.93s 2025-09-27 22:19:04.646983 | orchestrator | Setting sysctl values --------------------------------------------------- 2.88s 2025-09-27 22:19:04.646992 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.85s 2025-09-27 22:19:04.647000 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.84s 2025-09-27 22:19:04.647007 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 2.80s 2025-09-27 22:19:04.647014 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 2.73s 2025-09-27 22:19:04.647023 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.64s 2025-09-27 22:19:07.667025 | orchestrator | 2025-09-27 22:19:07 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:19:07.668963 | orchestrator | 2025-09-27 22:19:07 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:19:07.670060 | orchestrator | 2025-09-27 22:19:07 | INFO  | Task 211ee314-7b22-406d-a3f9-9202ec0a24a1 is in state STARTED 2025-09-27 22:19:07.672484 | orchestrator | 2025-09-27 22:19:07 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:19:07.672532 | orchestrator | 2025-09-27 22:19:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:19:10.694379 | orchestrator | 2025-09-27 22:19:10 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:19:10.694923 | orchestrator | 2025-09-27 22:19:10 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:19:10.695490 | orchestrator | 2025-09-27 22:19:10 | INFO  | Task 211ee314-7b22-406d-a3f9-9202ec0a24a1 is in state STARTED 2025-09-27 22:19:10.696420 | 
orchestrator | 2025-09-27 22:19:10 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:19:10.696454 | orchestrator | 2025-09-27 22:19:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:19:13.730472 | orchestrator | 2025-09-27 22:19:13 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:19:13.731497 | orchestrator | 2025-09-27 22:19:13 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:19:13.732535 | orchestrator | 2025-09-27 22:19:13 | INFO  | Task 211ee314-7b22-406d-a3f9-9202ec0a24a1 is in state STARTED 2025-09-27 22:19:13.733283 | orchestrator | 2025-09-27 22:19:13 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:19:13.733305 | orchestrator | 2025-09-27 22:19:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:19:16.753311 | orchestrator | 2025-09-27 22:19:16 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:19:16.755134 | orchestrator | 2025-09-27 22:19:16 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:19:16.755173 | orchestrator | 2025-09-27 22:19:16 | INFO  | Task 211ee314-7b22-406d-a3f9-9202ec0a24a1 is in state STARTED 2025-09-27 22:19:16.755179 | orchestrator | 2025-09-27 22:19:16 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:19:16.755188 | orchestrator | 2025-09-27 22:19:16 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:19:19.793301 | orchestrator | 2025-09-27 22:19:19 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:19:19.793955 | orchestrator | 2025-09-27 22:19:19 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:19:19.795551 | orchestrator | 2025-09-27 22:19:19 | INFO  | Task 211ee314-7b22-406d-a3f9-9202ec0a24a1 is in state STARTED 2025-09-27 22:19:19.798928 | orchestrator | 2025-09-27 
22:19:19 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:19:19.798957 | orchestrator | 2025-09-27 22:19:19 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for tasks 90e21e64, 5b8911fb, 211ee314 and 0c05156f repeat every ~3 seconds from 22:19:22 through 22:20:11, all in state STARTED] 2025-09-27 22:20:14.550970 | orchestrator | 2025-09-27 22:20:14 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:14.551123 | orchestrator | 2025-09-27 22:20:14 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:14.551910 | orchestrator | 2025-09-27 22:20:14 | INFO  | Task 211ee314-7b22-406d-a3f9-9202ec0a24a1 is in state STARTED 2025-09-27 22:20:14.552559 | orchestrator | 2025-09-27 22:20:14 | INFO  | Task 
0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:20:14.552625 | orchestrator | 2025-09-27 22:20:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:17.578507 | orchestrator | 2025-09-27 22:20:17 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:17.581318 | orchestrator | 2025-09-27 22:20:17 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:17.582355 | orchestrator | 2025-09-27 22:20:17 | INFO  | Task 211ee314-7b22-406d-a3f9-9202ec0a24a1 is in state STARTED 2025-09-27 22:20:17.583669 | orchestrator | 2025-09-27 22:20:17 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:20:17.583821 | orchestrator | 2025-09-27 22:20:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:20.613817 | orchestrator | 2025-09-27 22:20:20 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:20.614283 | orchestrator | 2025-09-27 22:20:20 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:20.614906 | orchestrator | 2025-09-27 22:20:20 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:20.615982 | orchestrator | 2025-09-27 22:20:20 | INFO  | Task 211ee314-7b22-406d-a3f9-9202ec0a24a1 is in state SUCCESS 2025-09-27 22:20:20.617498 | orchestrator | 2025-09-27 22:20:20.617533 | orchestrator | 2025-09-27 22:20:20.617542 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:20:20.617550 | orchestrator | 2025-09-27 22:20:20.617558 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:20:20.617566 | orchestrator | Saturday 27 September 2025 22:19:07 +0000 (0:00:00.280) 0:00:00.280 **** 2025-09-27 22:20:20.617573 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:20:20.617582 | orchestrator | ok: [testbed-node-1] 
2025-09-27 22:20:20.617589 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:20:20.617597 | orchestrator | 2025-09-27 22:20:20.617604 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:20:20.617633 | orchestrator | Saturday 27 September 2025 22:19:08 +0000 (0:00:00.301) 0:00:00.582 **** 2025-09-27 22:20:20.617641 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-27 22:20:20.617649 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-27 22:20:20.617656 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-27 22:20:20.617664 | orchestrator | 2025-09-27 22:20:20.617671 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-27 22:20:20.617678 | orchestrator | 2025-09-27 22:20:20.617685 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-27 22:20:20.617692 | orchestrator | Saturday 27 September 2025 22:19:08 +0000 (0:00:00.400) 0:00:00.983 **** 2025-09-27 22:20:20.617699 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:20:20.617747 | orchestrator | 2025-09-27 22:20:20.617755 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-27 22:20:20.617762 | orchestrator | Saturday 27 September 2025 22:19:09 +0000 (0:00:00.546) 0:00:01.529 **** 2025-09-27 22:20:20.617769 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-27 22:20:20.617776 | orchestrator | 2025-09-27 22:20:20.617783 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-27 22:20:20.617790 | orchestrator | Saturday 27 September 2025 22:19:13 +0000 (0:00:04.339) 0:00:05.869 **** 2025-09-27 22:20:20.617812 | orchestrator | changed: [testbed-node-0] => 
(item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-27 22:20:20.617820 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-27 22:20:20.617827 | orchestrator | 2025-09-27 22:20:20.617837 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-27 22:20:20.617848 | orchestrator | Saturday 27 September 2025 22:19:20 +0000 (0:00:06.930) 0:00:12.799 **** 2025-09-27 22:20:20.617860 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 22:20:20.617870 | orchestrator | 2025-09-27 22:20:20.617880 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-27 22:20:20.617891 | orchestrator | Saturday 27 September 2025 22:19:23 +0000 (0:00:03.598) 0:00:16.398 **** 2025-09-27 22:20:20.617902 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 22:20:20.617913 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-27 22:20:20.617924 | orchestrator | 2025-09-27 22:20:20.617934 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-27 22:20:20.617945 | orchestrator | Saturday 27 September 2025 22:19:27 +0000 (0:00:03.985) 0:00:20.383 **** 2025-09-27 22:20:20.617956 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 22:20:20.617968 | orchestrator | 2025-09-27 22:20:20.617978 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-27 22:20:20.617985 | orchestrator | Saturday 27 September 2025 22:19:31 +0000 (0:00:03.331) 0:00:23.714 **** 2025-09-27 22:20:20.617992 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-27 22:20:20.617999 | orchestrator | 2025-09-27 22:20:20.618006 | orchestrator | TASK [placement : include_tasks] *********************************************** 
2025-09-27 22:20:20.618013 | orchestrator | Saturday 27 September 2025 22:19:35 +0000 (0:00:04.355) 0:00:28.070 **** 2025-09-27 22:20:20.618060 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:20:20.618068 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:20:20.618075 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:20:20.618082 | orchestrator | 2025-09-27 22:20:20.618089 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-27 22:20:20.618097 | orchestrator | Saturday 27 September 2025 22:19:35 +0000 (0:00:00.286) 0:00:28.356 **** 2025-09-27 22:20:20.618122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618174 | orchestrator | 2025-09-27 22:20:20.618182 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-27 22:20:20.618190 | orchestrator | Saturday 27 September 2025 22:19:36 +0000 (0:00:00.935) 0:00:29.292 **** 2025-09-27 22:20:20.618198 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:20:20.618205 | orchestrator | 2025-09-27 22:20:20.618213 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-27 22:20:20.618220 | orchestrator | Saturday 27 
September 2025 22:19:36 +0000 (0:00:00.117) 0:00:29.409 **** 2025-09-27 22:20:20.618228 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:20:20.618236 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:20:20.618244 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:20:20.618260 | orchestrator | 2025-09-27 22:20:20.618269 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-27 22:20:20.618277 | orchestrator | Saturday 27 September 2025 22:19:37 +0000 (0:00:00.486) 0:00:29.896 **** 2025-09-27 22:20:20.618285 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:20:20.618292 | orchestrator | 2025-09-27 22:20:20.618300 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-27 22:20:20.618308 | orchestrator | Saturday 27 September 2025 22:19:37 +0000 (0:00:00.501) 0:00:30.397 **** 2025-09-27 22:20:20.618326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618362 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618389 | orchestrator | 2025-09-27 22:20:20.618400 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-27 22:20:20.618411 | 
orchestrator | Saturday 27 September 2025 22:19:39 +0000 (0:00:01.489) 0:00:31.886 **** 2025-09-27 22:20:20.618422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:20:20.618434 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:20:20.618446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}})  2025-09-27 22:20:20.618466 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:20:20.618495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:20:20.618518 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:20:20.618537 | orchestrator | 2025-09-27 22:20:20.618556 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-27 22:20:20.618576 | orchestrator | Saturday 27 September 2025 22:19:40 +0000 (0:00:00.891) 0:00:32.778 **** 2025-09-27 22:20:20.618597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:20:20.618616 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:20:20.618636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:20:20.618668 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:20:20.618688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:20:20.618737 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:20:20.618758 | orchestrator | 2025-09-27 22:20:20.618774 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-27 22:20:20.618792 | orchestrator | Saturday 27 September 2025 22:19:40 +0000 (0:00:00.678) 0:00:33.457 **** 2025-09-27 22:20:20.618829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618902 | orchestrator | 2025-09-27 22:20:20.618921 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-27 22:20:20.618938 | orchestrator | Saturday 27 September 2025 22:19:42 +0000 (0:00:01.423) 0:00:34.881 **** 2025-09-27 22:20:20.618956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.618983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.619015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.619034 | orchestrator | 2025-09-27 22:20:20.619053 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-27 22:20:20.619070 | orchestrator | Saturday 27 September 2025 22:19:45 +0000 (0:00:02.693) 0:00:37.574 **** 2025-09-27 22:20:20.619088 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-27 22:20:20.619107 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-27 22:20:20.619126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-27 22:20:20.619144 | orchestrator | 2025-09-27 22:20:20.619162 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-27 22:20:20.619180 | orchestrator | Saturday 27 September 2025 22:19:46 +0000 (0:00:01.833) 0:00:39.407 **** 2025-09-27 22:20:20.619196 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:20:20.619212 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:20:20.619230 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:20:20.619257 | orchestrator | 2025-09-27 22:20:20.619275 | orchestrator | TASK [placement : Copying over 
existing policy file] *************************** 2025-09-27 22:20:20.619292 | orchestrator | Saturday 27 September 2025 22:19:48 +0000 (0:00:01.414) 0:00:40.822 **** 2025-09-27 22:20:20.619309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:20:20.619327 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:20:20.619344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:20:20.619380 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:20:20.619410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 22:20:20.619429 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:20:20.619447 | orchestrator | 2025-09-27 22:20:20.619465 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-27 22:20:20.619483 | orchestrator | Saturday 27 September 2025 22:19:48 +0000 (0:00:00.520) 0:00:41.343 **** 2025-09-27 22:20:20.619501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.619530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.619551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 22:20:20.619570 | orchestrator | 2025-09-27 22:20:20.619590 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-27 22:20:20.619608 | orchestrator | Saturday 27 September 2025 22:19:50 +0000 (0:00:01.702) 0:00:43.046 **** 2025-09-27 22:20:20.619626 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:20:20.619644 | orchestrator | 2025-09-27 22:20:20.619669 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-27 22:20:20.619687 | orchestrator | Saturday 27 September 2025 22:19:53 +0000 (0:00:03.059) 0:00:46.105 **** 2025-09-27 22:20:20.619704 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:20:20.619766 | orchestrator | 2025-09-27 22:20:20.619785 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-27 22:20:20.619804 | orchestrator | Saturday 27 September 2025 22:19:56 +0000 (0:00:02.886) 0:00:48.992 **** 2025-09-27 22:20:20.619823 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:20:20.619841 | orchestrator | 2025-09-27 22:20:20.619859 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-27 22:20:20.619877 | orchestrator | Saturday 27 September 2025 22:20:11 +0000 (0:00:14.840) 0:01:03.832 **** 2025-09-27 22:20:20.619895 | orchestrator | 2025-09-27 22:20:20.619913 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-27 22:20:20.619931 | orchestrator | Saturday 27 September 2025 22:20:11 +0000 (0:00:00.067) 0:01:03.900 **** 2025-09-27 22:20:20.619950 | orchestrator | 2025-09-27 22:20:20.619978 | orchestrator | 
TASK [placement : Flush handlers] ********************************************** 2025-09-27 22:20:20.619996 | orchestrator | Saturday 27 September 2025 22:20:11 +0000 (0:00:00.107) 0:01:04.008 **** 2025-09-27 22:20:20.620015 | orchestrator | 2025-09-27 22:20:20.620032 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-27 22:20:20.620051 | orchestrator | Saturday 27 September 2025 22:20:11 +0000 (0:00:00.109) 0:01:04.117 **** 2025-09-27 22:20:20.620081 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:20:20.620100 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:20:20.620119 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:20:20.620137 | orchestrator | 2025-09-27 22:20:20.620156 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:20:20.620176 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 22:20:20.620196 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 22:20:20.620214 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 22:20:20.620232 | orchestrator | 2025-09-27 22:20:20.620251 | orchestrator | 2025-09-27 22:20:20.620268 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:20:20.620286 | orchestrator | Saturday 27 September 2025 22:20:17 +0000 (0:00:05.809) 0:01:09.926 **** 2025-09-27 22:20:20.620303 | orchestrator | =============================================================================== 2025-09-27 22:20:20.620320 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.84s 2025-09-27 22:20:20.620336 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.93s 2025-09-27 22:20:20.620352 | 
orchestrator | placement : Restart placement-api container ----------------------------- 5.81s 2025-09-27 22:20:20.620369 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.36s 2025-09-27 22:20:20.620387 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.34s 2025-09-27 22:20:20.620404 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.99s 2025-09-27 22:20:20.620422 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.60s 2025-09-27 22:20:20.620441 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.33s 2025-09-27 22:20:20.620458 | orchestrator | placement : Creating placement databases -------------------------------- 3.06s 2025-09-27 22:20:20.620477 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.89s 2025-09-27 22:20:20.620493 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.69s 2025-09-27 22:20:20.620512 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.83s 2025-09-27 22:20:20.620530 | orchestrator | placement : Check placement containers ---------------------------------- 1.70s 2025-09-27 22:20:20.620549 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.49s 2025-09-27 22:20:20.620567 | orchestrator | placement : Copying over config.json files for services ----------------- 1.42s 2025-09-27 22:20:20.620584 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.41s 2025-09-27 22:20:20.620602 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.94s 2025-09-27 22:20:20.620620 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.89s 2025-09-27 22:20:20.620637 | 
orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.68s 2025-09-27 22:20:20.620655 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s 2025-09-27 22:20:20.620673 | orchestrator | 2025-09-27 22:20:20 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:20:20.620691 | orchestrator | 2025-09-27 22:20:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:23.647067 | orchestrator | 2025-09-27 22:20:23 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:23.647163 | orchestrator | 2025-09-27 22:20:23 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:23.648521 | orchestrator | 2025-09-27 22:20:23 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:23.649279 | orchestrator | 2025-09-27 22:20:23 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state STARTED 2025-09-27 22:20:23.649945 | orchestrator | 2025-09-27 22:20:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:26.672383 | orchestrator | 2025-09-27 22:20:26 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:26.672637 | orchestrator | 2025-09-27 22:20:26 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:26.673476 | orchestrator | 2025-09-27 22:20:26 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:26.674129 | orchestrator | 2025-09-27 22:20:26 | INFO  | Task 0c05156f-85dc-4dcd-84c9-75ee94285043 is in state SUCCESS 2025-09-27 22:20:26.674241 | orchestrator | 2025-09-27 22:20:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:29.711917 | orchestrator | 2025-09-27 22:20:29 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:29.712193 | orchestrator | 2025-09-27 22:20:29 | INFO  | Task 
90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:29.712841 | orchestrator | 2025-09-27 22:20:29 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:29.713730 | orchestrator | 2025-09-27 22:20:29 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:29.713785 | orchestrator | 2025-09-27 22:20:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:32.738978 | orchestrator | 2025-09-27 22:20:32 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:32.739278 | orchestrator | 2025-09-27 22:20:32 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:32.740891 | orchestrator | 2025-09-27 22:20:32 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:32.741948 | orchestrator | 2025-09-27 22:20:32 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:32.742226 | orchestrator | 2025-09-27 22:20:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:35.784543 | orchestrator | 2025-09-27 22:20:35 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:35.786182 | orchestrator | 2025-09-27 22:20:35 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:35.787796 | orchestrator | 2025-09-27 22:20:35 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:35.789377 | orchestrator | 2025-09-27 22:20:35 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:35.789448 | orchestrator | 2025-09-27 22:20:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:38.829239 | orchestrator | 2025-09-27 22:20:38 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:38.830201 | orchestrator | 2025-09-27 22:20:38 | INFO  | Task 
90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:38.831991 | orchestrator | 2025-09-27 22:20:38 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:38.833630 | orchestrator | 2025-09-27 22:20:38 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:38.833658 | orchestrator | 2025-09-27 22:20:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:41.875681 | orchestrator | 2025-09-27 22:20:41 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:41.876064 | orchestrator | 2025-09-27 22:20:41 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:41.877061 | orchestrator | 2025-09-27 22:20:41 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:41.878414 | orchestrator | 2025-09-27 22:20:41 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:41.878448 | orchestrator | 2025-09-27 22:20:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:44.909789 | orchestrator | 2025-09-27 22:20:44 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:44.912221 | orchestrator | 2025-09-27 22:20:44 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:44.913724 | orchestrator | 2025-09-27 22:20:44 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:44.913776 | orchestrator | 2025-09-27 22:20:44 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:44.913833 | orchestrator | 2025-09-27 22:20:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:47.954394 | orchestrator | 2025-09-27 22:20:47 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:47.958654 | orchestrator | 2025-09-27 22:20:47 | INFO  | Task 
90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:47.961214 | orchestrator | 2025-09-27 22:20:47 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:47.964192 | orchestrator | 2025-09-27 22:20:47 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:47.964498 | orchestrator | 2025-09-27 22:20:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:51.010378 | orchestrator | 2025-09-27 22:20:51 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:51.012183 | orchestrator | 2025-09-27 22:20:51 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:51.013058 | orchestrator | 2025-09-27 22:20:51 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:51.015383 | orchestrator | 2025-09-27 22:20:51 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:51.015436 | orchestrator | 2025-09-27 22:20:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:54.057429 | orchestrator | 2025-09-27 22:20:54 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:54.058578 | orchestrator | 2025-09-27 22:20:54 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:54.060301 | orchestrator | 2025-09-27 22:20:54 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:54.061888 | orchestrator | 2025-09-27 22:20:54 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:54.061979 | orchestrator | 2025-09-27 22:20:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:20:57.097891 | orchestrator | 2025-09-27 22:20:57 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:20:57.098348 | orchestrator | 2025-09-27 22:20:57 | INFO  | Task 
90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:20:57.101139 | orchestrator | 2025-09-27 22:20:57 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:20:57.102986 | orchestrator | 2025-09-27 22:20:57 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:20:57.103082 | orchestrator | 2025-09-27 22:20:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:21:00.149855 | orchestrator | 2025-09-27 22:21:00 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:21:00.151590 | orchestrator | 2025-09-27 22:21:00 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state STARTED 2025-09-27 22:21:00.153086 | orchestrator | 2025-09-27 22:21:00 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED 2025-09-27 22:21:00.154680 | orchestrator | 2025-09-27 22:21:00 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:21:00.154950 | orchestrator | 2025-09-27 22:21:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:21:03.192163 | orchestrator | 2025-09-27 22:21:03 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:21:03.195255 | orchestrator | 2025-09-27 22:21:03 | INFO  | Task 90e21e64-5a3e-4fc5-839d-d90759be8b8a is in state SUCCESS 2025-09-27 22:21:03.195441 | orchestrator | 2025-09-27 22:21:03.195463 | orchestrator | 2025-09-27 22:21:03.195476 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-27 22:21:03.195488 | orchestrator | 2025-09-27 22:21:03.195499 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-27 22:21:03.195511 | orchestrator | Saturday 27 September 2025 22:18:21 +0000 (0:00:00.099) 0:00:00.099 **** 2025-09-27 22:21:03.195522 | orchestrator | changed: [localhost] 2025-09-27 22:21:03.195534 | orchestrator | 2025-09-27 
22:21:03.195545 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-27 22:21:03.195556 | orchestrator | Saturday 27 September 2025 22:18:22 +0000 (0:00:00.830) 0:00:00.929 **** 2025-09-27 22:21:03.195590 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2025-09-27 22:21:03.195602 | orchestrator | changed: [localhost] 2025-09-27 22:21:03.195614 | orchestrator | 2025-09-27 22:21:03.195624 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-27 22:21:03.195635 | orchestrator | Saturday 27 September 2025 22:19:14 +0000 (0:00:51.423) 0:00:52.353 **** 2025-09-27 22:21:03.195646 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2025-09-27 22:21:03.195797 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 2025-09-27 22:21:03.195814 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (1 retries left). 
2025-09-27 22:21:03.195826 | orchestrator | changed: [localhost] 2025-09-27 22:21:03.195837 | orchestrator | 2025-09-27 22:21:03.195848 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:21:03.195858 | orchestrator | 2025-09-27 22:21:03.195869 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:21:03.195880 | orchestrator | Saturday 27 September 2025 22:20:23 +0000 (0:01:09.698) 0:02:02.051 **** 2025-09-27 22:21:03.195891 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:21:03.195902 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:21:03.195914 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:21:03.195925 | orchestrator | 2025-09-27 22:21:03.195936 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:21:03.195948 | orchestrator | Saturday 27 September 2025 22:20:24 +0000 (0:00:00.522) 0:02:02.574 **** 2025-09-27 22:21:03.195959 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-27 22:21:03.195970 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-27 22:21:03.195982 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-27 22:21:03.195993 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-27 22:21:03.196004 | orchestrator | 2025-09-27 22:21:03.196016 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-27 22:21:03.196051 | orchestrator | skipping: no hosts matched 2025-09-27 22:21:03.196064 | orchestrator | 2025-09-27 22:21:03.196078 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:21:03.196091 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:21:03.196106 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:21:03.196122 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:21:03.196134 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:21:03.196147 | orchestrator |
2025-09-27 22:21:03.196160 | orchestrator |
2025-09-27 22:21:03.196172 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:21:03.196185 | orchestrator | Saturday 27 September 2025 22:20:25 +0000 (0:00:00.811) 0:02:03.385 ****
2025-09-27 22:21:03.196197 | orchestrator | ===============================================================================
2025-09-27 22:21:03.196210 | orchestrator | Download ironic-agent kernel ------------------------------------------- 69.70s
2025-09-27 22:21:03.196222 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 51.42s
2025-09-27 22:21:03.196236 | orchestrator | Ensure the destination directory exists --------------------------------- 0.83s
2025-09-27 22:21:03.196248 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2025-09-27 22:21:03.196260 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s
2025-09-27 22:21:03.196274 | orchestrator |
2025-09-27 22:21:03.196986 | orchestrator |
2025-09-27 22:21:03.197048 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:21:03.197074 | orchestrator |
2025-09-27 22:21:03.197086 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:21:03.197098 | orchestrator | Saturday 27 September 2025 22:18:15 +0000 (0:00:00.276) 0:00:00.276 ****
2025-09-27 22:21:03.197109 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:21:03.197122 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:21:03.197147 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:21:03.197158 | orchestrator |
2025-09-27 22:21:03.197190 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:21:03.197203 | orchestrator | Saturday 27 September 2025 22:18:15 +0000 (0:00:00.338) 0:00:00.614 ****
2025-09-27 22:21:03.197214 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-09-27 22:21:03.197226 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-09-27 22:21:03.197238 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-09-27 22:21:03.197249 | orchestrator |
2025-09-27 22:21:03.197260 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-09-27 22:21:03.197272 | orchestrator |
2025-09-27 22:21:03.197283 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-27 22:21:03.197295 | orchestrator | Saturday 27 September 2025 22:18:15 +0000 (0:00:00.457) 0:00:01.071 ****
2025-09-27 22:21:03.197306 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:21:03.197318 | orchestrator |
2025-09-27 22:21:03.197330 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-09-27 22:21:03.197341 | orchestrator | Saturday 27 September 2025 22:18:16 +0000 (0:00:00.497) 0:00:01.569 ****
2025-09-27 22:21:03.197353 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-09-27 22:21:03.197364 | orchestrator |
2025-09-27 22:21:03.197375 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-09-27 22:21:03.197387 | orchestrator | Saturday 27 September 2025 22:18:20 +0000 (0:00:03.736) 0:00:05.305 ****
2025-09-27 22:21:03.197411 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-09-27 22:21:03.197423 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-09-27 22:21:03.197435 | orchestrator |
2025-09-27 22:21:03.197454 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-09-27 22:21:03.197466 | orchestrator | Saturday 27 September 2025 22:18:26 +0000 (0:00:06.426) 0:00:11.733 ****
2025-09-27 22:21:03.197478 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-27 22:21:03.197489 | orchestrator |
2025-09-27 22:21:03.197501 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-09-27 22:21:03.197515 | orchestrator | Saturday 27 September 2025 22:18:29 +0000 (0:00:03.287) 0:00:15.020 ****
2025-09-27 22:21:03.197528 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-27 22:21:03.197541 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-09-27 22:21:03.197555 | orchestrator |
2025-09-27 22:21:03.197568 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-09-27 22:21:03.197581 | orchestrator | Saturday 27 September 2025 22:18:33 +0000 (0:00:04.013) 0:00:19.033 ****
2025-09-27 22:21:03.197594 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-27 22:21:03.197607 | orchestrator |
2025-09-27 22:21:03.197620 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-09-27 22:21:03.197633 | orchestrator | Saturday 27 September 2025 22:18:37 +0000 (0:00:03.233) 0:00:22.267 ****
2025-09-27 22:21:03.197646 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-09-27 22:21:03.197660 | orchestrator |
2025-09-27 22:21:03.197753 | orchestrator | TASK [designate : Ensuring config
directories exist] *************************** 2025-09-27 22:21:03.197766 | orchestrator | Saturday 27 September 2025 22:18:41 +0000 (0:00:04.228) 0:00:26.495 **** 2025-09-27 22:21:03.197783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.197818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.197833 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.197863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.197876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.197888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.197900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.197921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.197933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.197952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.197972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.197984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.197995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198084 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-27 22:21:03.198149 | orchestrator |
2025-09-27 22:21:03.198160 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-09-27 22:21:03.198171 | orchestrator | Saturday 27 September 2025 22:18:44 +0000 (0:00:03.029) 0:00:29.525 ****
2025-09-27 22:21:03.198182 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:21:03.198192 | orchestrator |
2025-09-27 22:21:03.198201 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-09-27 22:21:03.198211 | orchestrator | Saturday 27 September 2025 22:18:44 +0000 (0:00:00.158) 0:00:29.684 ****
2025-09-27 22:21:03.198220 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:21:03.198231 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:21:03.198240 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:21:03.198250 | orchestrator |
2025-09-27 22:21:03.198260 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-27 22:21:03.198269 | orchestrator | Saturday 27 September 2025 22:18:44 +0000 (0:00:00.307) 0:00:29.992 ****
2025-09-27 22:21:03.198279 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:21:03.198288 | orchestrator |
2025-09-27 22:21:03.198298 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-09-27 22:21:03.198307 | orchestrator | Saturday 27 September 2025 22:18:45 +0000 (0:00:00.693)
0:00:30.685 **** 2025-09-27 22:21:03.198317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.198340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.198351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.198367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.198561 | orchestrator | 2025-09-27 22:21:03.198571 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-27 22:21:03.198581 | orchestrator | Saturday 27 September 2025 22:18:50 +0000 (0:00:05.397) 0:00:36.083 **** 2025-09-27 22:21:03.198591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.198607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:21:03.198624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198689 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:21:03.198700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.198717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:21:03.198774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198820 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:21:03.198830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.198847 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:21:03.198866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.198912 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:21:03.198921 | orchestrator | 2025-09-27 22:21:03.198931 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-27 22:21:03.198941 | orchestrator | Saturday 27 September 2025 22:18:51 +0000 (0:00:00.668) 0:00:36.752 **** 2025-09-27 22:21:03.199113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.199133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:21:03.199150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.199214 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:21:03.199239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:21:03.199256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199304 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:21:03.199314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.199331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:21:03.199342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.199395 | orchestrator | skipping: 
[testbed-node-2] 2025-09-27 22:21:03.199406 | orchestrator | 2025-09-27 22:21:03.199416 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-27 22:21:03.199426 | orchestrator | Saturday 27 September 2025 22:18:52 +0000 (0:00:01.097) 0:00:37.849 **** 2025-09-27 22:21:03.199443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.199453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.199470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.199481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199706 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199717 | orchestrator | 2025-09-27 22:21:03.199727 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-27 22:21:03.199746 | orchestrator | Saturday 27 September 2025 22:18:58 +0000 (0:00:05.691) 0:00:43.541 **** 2025-09-27 22:21:03.199761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.199772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.199782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.199799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.199992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-09-27 22:21:03.200065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-27 22:21:03.200076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-27 22:21:03.200092 | orchestrator |
2025-09-27 22:21:03.200102 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-09-27 22:21:03.200111 | orchestrator | Saturday 27 September 2025 22:19:11 +0000 (0:00:13.495) 0:00:57.037 ****
2025-09-27 22:21:03.200121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-27 22:21:03.200131 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-27 22:21:03.200141 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-27 22:21:03.200150 | orchestrator |
2025-09-27 22:21:03.200160 |
orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-09-27 22:21:03.200170 | orchestrator | Saturday 27 September 2025 22:19:15 +0000 (0:00:04.190) 0:01:01.227 ****
2025-09-27 22:21:03.200179 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-27 22:21:03.200189 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-27 22:21:03.200199 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-27 22:21:03.200208 | orchestrator |
2025-09-27 22:21:03.200223 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-09-27 22:21:03.200233 | orchestrator | Saturday 27 September 2025 22:19:18 +0000 (0:00:02.098) 0:01:03.325 ****
2025-09-27 22:21:03.200243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-27 22:21:03.200253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.200271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.200291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 
22:21:03.200464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200484 | orchestrator | 2025-09-27 22:21:03.200494 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-27 22:21:03.200504 | orchestrator | Saturday 27 September 2025 22:19:21 +0000 (0:00:03.216) 0:01:06.541 **** 2025-09-27 22:21:03.200519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.200530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.200540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.200562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.200707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.200738 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201031 | orchestrator | 2025-09-27 22:21:03.201041 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-27 22:21:03.201051 | 
orchestrator | Saturday 27 September 2025 22:19:23 +0000 (0:00:02.626) 0:01:09.168 **** 2025-09-27 22:21:03.201060 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:21:03.201070 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:21:03.201080 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:21:03.201090 | orchestrator | 2025-09-27 22:21:03.201099 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-27 22:21:03.201109 | orchestrator | Saturday 27 September 2025 22:19:24 +0000 (0:00:00.461) 0:01:09.629 **** 2025-09-27 22:21:03.201124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.201135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:21:03.201145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201203 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:21:03.201217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.201228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:21:03.201238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201291 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:21:03.201301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 22:21:03.201317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 22:21:03.201327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 22:21:03.201379 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:21:03.201389 | orchestrator | 2025-09-27 22:21:03.201398 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-27 22:21:03.201408 | orchestrator | Saturday 27 September 2025 22:19:25 +0000 (0:00:01.420) 0:01:11.050 **** 2025-09-27 22:21:03.201418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.201433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.201452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 22:21:03.201463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201578 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 22:21:03.201661 | orchestrator | 2025-09-27 22:21:03.201694 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-27 22:21:03.201705 | orchestrator | Saturday 27 September 2025 22:19:30 
+0000 (0:00:04.492) 0:01:15.542 ****
2025-09-27 22:21:03.201716 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:21:03.201727 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:21:03.201738 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:21:03.201749 | orchestrator |
2025-09-27 22:21:03.201760 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-27 22:21:03.201771 | orchestrator | Saturday 27 September 2025 22:19:30 +0000 (0:00:00.284) 0:01:15.826 ****
2025-09-27 22:21:03.201783 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-27 22:21:03.201794 | orchestrator |
2025-09-27 22:21:03.201810 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-27 22:21:03.201827 | orchestrator | Saturday 27 September 2025 22:19:32 +0000 (0:00:02.349) 0:01:18.176 ****
2025-09-27 22:21:03.201843 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-27 22:21:03.201859 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-27 22:21:03.201877 | orchestrator |
2025-09-27 22:21:03.201895 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-27 22:21:03.201912 | orchestrator | Saturday 27 September 2025 22:19:35 +0000 (0:00:02.530) 0:01:20.706 ****
2025-09-27 22:21:03.201929 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:21:03.201948 | orchestrator |
2025-09-27 22:21:03.201966 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-27 22:21:03.201992 | orchestrator | Saturday 27 September 2025 22:19:52 +0000 (0:00:16.709) 0:01:37.415 ****
2025-09-27 22:21:03.202189 | orchestrator |
2025-09-27 22:21:03.202199 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-27 22:21:03.202210 | orchestrator | Saturday 27 September 2025 22:19:52 +0000 (0:00:00.342) 0:01:37.757 ****
2025-09-27 22:21:03.202219 | orchestrator |
2025-09-27 22:21:03.202229 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-27 22:21:03.202238 | orchestrator | Saturday 27 September 2025 22:19:52 +0000 (0:00:00.068) 0:01:37.825 ****
2025-09-27 22:21:03.202248 | orchestrator |
2025-09-27 22:21:03.202257 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-27 22:21:03.202274 | orchestrator | Saturday 27 September 2025 22:19:52 +0000 (0:00:00.074) 0:01:37.900 ****
2025-09-27 22:21:03.202284 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:21:03.202294 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:21:03.202303 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:21:03.202313 | orchestrator |
2025-09-27 22:21:03.202322 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-27 22:21:03.202332 | orchestrator | Saturday 27 September 2025 22:20:03 +0000 (0:00:10.780) 0:01:48.680 ****
2025-09-27 22:21:03.202341 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:21:03.202351 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:21:03.202360 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:21:03.202370 | orchestrator |
2025-09-27 22:21:03.202379 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-27 22:21:03.202388 | orchestrator | Saturday 27 September 2025 22:20:14 +0000 (0:00:10.980) 0:01:59.661 ****
2025-09-27 22:21:03.202398 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:21:03.202408 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:21:03.202418 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:21:03.202427 | orchestrator |
2025-09-27 22:21:03.202437 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-27 22:21:03.202446 | orchestrator | Saturday 27 September 2025 22:20:24 +0000 (0:00:10.101) 0:02:09.763 ****
2025-09-27 22:21:03.202455 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:21:03.202465 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:21:03.202475 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:21:03.202484 | orchestrator |
2025-09-27 22:21:03.202494 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-27 22:21:03.202503 | orchestrator | Saturday 27 September 2025 22:20:30 +0000 (0:00:05.761) 0:02:15.524 ****
2025-09-27 22:21:03.202513 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:21:03.202522 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:21:03.202531 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:21:03.202541 | orchestrator |
2025-09-27 22:21:03.202551 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-27 22:21:03.202560 | orchestrator | Saturday 27 September 2025 22:20:41 +0000 (0:00:11.672) 0:02:27.196 ****
2025-09-27 22:21:03.202570 | orchestrator | changed: [testbed-node-1]
2025-09-27 22:21:03.202579 | orchestrator | changed: [testbed-node-2]
2025-09-27 22:21:03.202589 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:21:03.202598 | orchestrator |
2025-09-27 22:21:03.202608 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-27 22:21:03.202618 | orchestrator | Saturday 27 September 2025 22:20:53 +0000 (0:00:11.074) 0:02:38.271 ****
2025-09-27 22:21:03.202627 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:21:03.202637 | orchestrator |
2025-09-27 22:21:03.202646 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:21:03.202656 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-27 22:21:03.202807 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 22:21:03.202853 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-27 22:21:03.202865 | orchestrator |
2025-09-27 22:21:03.202877 | orchestrator |
2025-09-27 22:21:03.202903 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:21:03.202915 | orchestrator | Saturday 27 September 2025 22:21:00 +0000 (0:00:07.529) 0:02:45.800 ****
2025-09-27 22:21:03.202927 | orchestrator | ===============================================================================
2025-09-27 22:21:03.202938 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.71s
2025-09-27 22:21:03.202949 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.50s
2025-09-27 22:21:03.202960 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.67s
2025-09-27 22:21:03.202971 | orchestrator | designate : Restart designate-worker container ------------------------- 11.08s
2025-09-27 22:21:03.202982 | orchestrator | designate : Restart designate-api container ---------------------------- 10.98s
2025-09-27 22:21:03.202993 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.78s
2025-09-27 22:21:03.203004 | orchestrator | designate : Restart designate-central container ------------------------ 10.10s
2025-09-27 22:21:03.203016 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.53s
2025-09-27 22:21:03.203027 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.43s
2025-09-27 22:21:03.203038 | orchestrator | designate : Restart designate-producer container ------------------------ 5.76s
2025-09-27 22:21:03.203049 | orchestrator | designate : Copying
over config.json files for services ----------------- 5.69s
2025-09-27 22:21:03.203058 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.40s
2025-09-27 22:21:03.203068 | orchestrator | designate : Check designate containers ---------------------------------- 4.49s
2025-09-27 22:21:03.203078 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.23s
2025-09-27 22:21:03.203087 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.19s
2025-09-27 22:21:03.203094 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.01s
2025-09-27 22:21:03.203100 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.74s
2025-09-27 22:21:03.203107 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.29s
2025-09-27 22:21:03.203113 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.23s
2025-09-27 22:21:03.203126 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.22s
2025-09-27 22:21:03.203133 | orchestrator | 2025-09-27 22:21:03 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state STARTED
2025-09-27 22:21:03.203140 | orchestrator | 2025-09-27 22:21:03 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:21:03.203147 | orchestrator | 2025-09-27 22:21:03 | INFO  | Task 55766764-018b-4497-a069-d60a150b5227 is in state STARTED
2025-09-27 22:21:03.203575 | orchestrator | 2025-09-27 22:21:03 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:21:06.241614 | orchestrator | 2025-09-27 22:21:06 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED
2025-09-27 22:21:12.316868 | orchestrator | 2025-09-27 22:21:12 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED
2025-09-27 22:21:12.318232 | orchestrator | 2025-09-27 22:21:12 | INFO  | Task 55766764-018b-4497-a069-d60a150b5227 is in state SUCCESS
2025-09-27 22:22:25.551182 | orchestrator | 2025-09-27 22:22:25 | INFO  | Task 8c90a1f0-db25-4b5c-802f-22ae62647b81 is in state SUCCESS
2025-09-27 22:22:25.552774 | orchestrator |
2025-09-27 22:22:25.552784 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:22:25.552793 | orchestrator |
2025-09-27 22:22:25.552801 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:22:25.552809 | orchestrator | Saturday 27 September 2025 22:21:06 +0000 (0:00:00.256) 0:00:00.256
****
2025-09-27 22:22:25.552817 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:22:25.552826 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:22:25.552833 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:22:25.552840 | orchestrator |
2025-09-27 22:22:25.552863 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:22:25.552871 | orchestrator | Saturday 27 September 2025 22:21:07 +0000 (0:00:00.621) 0:00:00.878 ****
2025-09-27 22:22:25.552878 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-09-27 22:22:25.552886 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-09-27 22:22:25.552894 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-09-27 22:22:25.552901 | orchestrator |
2025-09-27 22:22:25.552908 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-09-27 22:22:25.552915 | orchestrator |
2025-09-27 22:22:25.552922 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-09-27 22:22:25.552930 | orchestrator | Saturday 27 September 2025 22:21:08 +0000 (0:00:01.058) 0:00:01.936 ****
2025-09-27 22:22:25.552937 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:22:25.552944 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:22:25.552951 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:22:25.552958 | orchestrator |
2025-09-27 22:22:25.552965 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 22:22:25.553050 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:22:25.553066 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:22:25.553073 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 22:22:25.553080 | orchestrator |
2025-09-27 22:22:25.553088 | orchestrator |
2025-09-27 22:22:25.553096 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:22:25.553103 | orchestrator | Saturday 27 September 2025 22:21:09 +0000 (0:00:01.071) 0:00:03.008 ****
2025-09-27 22:22:25.553111 | orchestrator | ===============================================================================
2025-09-27 22:22:25.553118 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.07s
2025-09-27 22:22:25.553125 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s
2025-09-27 22:22:25.553133 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s
2025-09-27 22:22:25.553140 | orchestrator |
2025-09-27 22:22:25.553148 | orchestrator |
2025-09-27 22:22:25.553155 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 22:22:25.553162 | orchestrator |
2025-09-27 22:22:25.553182 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 22:22:25.553189 | orchestrator | Saturday 27 September 2025 22:20:22 +0000 (0:00:00.431) 0:00:00.431 ****
2025-09-27 22:22:25.553197 | orchestrator | ok: [testbed-node-0]
2025-09-27 22:22:25.553205 | orchestrator | ok: [testbed-node-1]
2025-09-27 22:22:25.553212 | orchestrator | ok: [testbed-node-2]
2025-09-27 22:22:25.553219 | orchestrator |
2025-09-27 22:22:25.553227 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 22:22:25.553234 | orchestrator | Saturday 27 September 2025 22:20:22 +0000 (0:00:00.271) 0:00:00.703 ****
2025-09-27 22:22:25.553241 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-27 22:22:25.553249 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-27 22:22:25.553256 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-27 22:22:25.553264 | orchestrator |
2025-09-27 22:22:25.553271 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-27 22:22:25.553278 | orchestrator |
2025-09-27 22:22:25.553285 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-27 22:22:25.553293 | orchestrator | Saturday 27 September 2025 22:20:23 +0000 (0:00:00.339) 0:00:01.042 ****
2025-09-27 22:22:25.553300 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:22:25.553308 | orchestrator |
2025-09-27 22:22:25.553315 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-27 22:22:25.553323 | orchestrator | Saturday 27 September 2025 22:20:23 +0000 (0:00:00.458) 0:00:01.501 ****
2025-09-27 22:22:25.553331 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-27 22:22:25.553339 | orchestrator |
2025-09-27 22:22:25.553347 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-27 22:22:25.553356 | orchestrator | Saturday 27 September 2025 22:20:27 +0000 (0:00:03.805) 0:00:05.307 ****
2025-09-27 22:22:25.553364 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-27 22:22:25.553372 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-27 22:22:25.553381 | orchestrator |
2025-09-27 22:22:25.553389 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-27 22:22:25.553398 | orchestrator | Saturday 27 September 2025 22:20:34 +0000 (0:00:06.611) 0:00:11.918 ****
2025-09-27 22:22:25.553406 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-27 22:22:25.553421 | orchestrator |
2025-09-27 22:22:25.553429 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-27 22:22:25.553437 | orchestrator | Saturday 27 September 2025 22:20:37 +0000 (0:00:03.583) 0:00:15.502 ****
2025-09-27 22:22:25.553456 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-27 22:22:25.553465 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-27 22:22:25.553474 | orchestrator |
2025-09-27 22:22:25.553482 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-27 22:22:25.553491 | orchestrator | Saturday 27 September 2025 22:20:41 +0000 (0:00:04.017) 0:00:19.520 ****
2025-09-27 22:22:25.553499 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-27 22:22:25.553508 | orchestrator |
2025-09-27 22:22:25.553516 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-27 22:22:25.553529 | orchestrator | Saturday 27 September 2025 22:20:45 +0000 (0:00:03.583) 0:00:23.103 ****
2025-09-27 22:22:25.553536 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-09-27 22:22:25.553543 | orchestrator |
2025-09-27 22:22:25.553550 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-09-27 22:22:25.553609 | orchestrator | Saturday 27 September 2025 22:20:49 +0000 (0:00:04.293) 0:00:27.396 ****
2025-09-27 22:22:25.553629 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:22:25.553640 | orchestrator |
2025-09-27 22:22:25.553651 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-09-27 22:22:25.553662 | orchestrator | Saturday 27 September 2025 22:20:53 +0000 (0:00:03.515) 0:00:30.912 ****
2025-09-27 22:22:25.553674 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:22:25.553684 | orchestrator |
2025-09-27 22:22:25.553695 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-09-27 22:22:25.553705 | orchestrator | Saturday 27 September 2025 22:20:56 +0000 (0:00:03.877) 0:00:34.790 ****
2025-09-27 22:22:25.553729 | orchestrator | changed: [testbed-node-0]
2025-09-27 22:22:25.553740 | orchestrator |
2025-09-27 22:22:25.553752 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-09-27 22:22:25.553764 | orchestrator | Saturday 27 September 2025 22:21:00 +0000 (0:00:03.690) 0:00:38.481 ****
2025-09-27 22:22:25.553780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-27 22:22:25.553797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-27 22:22:25.553820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-27 22:22:25.553850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-27 22:22:25.553862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-27 22:22:25.553869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-27 22:22:25.553877 | orchestrator |
2025-09-27 22:22:25.553884 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-09-27 22:22:25.553892 | orchestrator | Saturday 27 September 2025 22:21:02 +0000
(0:00:01.407) 0:00:39.888 **** 2025-09-27 22:22:25.553899 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:22:25.553906 | orchestrator | 2025-09-27 22:22:25.553913 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-27 22:22:25.553920 | orchestrator | Saturday 27 September 2025 22:21:02 +0000 (0:00:00.146) 0:00:40.034 **** 2025-09-27 22:22:25.553928 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:22:25.553935 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:22:25.553942 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:22:25.553949 | orchestrator | 2025-09-27 22:22:25.553964 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-27 22:22:25.553972 | orchestrator | Saturday 27 September 2025 22:21:02 +0000 (0:00:00.475) 0:00:40.510 **** 2025-09-27 22:22:25.553979 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 22:22:25.553986 | orchestrator | 2025-09-27 22:22:25.553994 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-27 22:22:25.554001 | orchestrator | Saturday 27 September 2025 22:21:03 +0000 (0:00:00.980) 0:00:41.491 **** 2025-09-27 22:22:25.554008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554180 | orchestrator | 2025-09-27 22:22:25.554188 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-27 22:22:25.554196 | orchestrator | Saturday 27 September 2025 22:21:06 +0000 (0:00:03.172) 0:00:44.664 **** 2025-09-27 22:22:25.554205 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:22:25.554213 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:22:25.554221 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:22:25.554229 | orchestrator | 2025-09-27 22:22:25.554237 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-27 22:22:25.554250 | orchestrator | Saturday 27 September 2025 22:21:07 +0000 (0:00:00.484) 0:00:45.148 **** 2025-09-27 22:22:25.554259 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:22:25.554267 | orchestrator | 2025-09-27 22:22:25.554276 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-27 22:22:25.554290 | orchestrator | Saturday 27 September 2025 22:21:08 +0000 (0:00:00.665) 0:00:45.813 **** 2025-09-27 22:22:25.554304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554365 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554373 | orchestrator | 2025-09-27 22:22:25.554381 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-27 22:22:25.554389 | orchestrator | Saturday 27 September 2025 22:21:10 +0000 (0:00:02.798) 0:00:48.612 **** 2025-09-27 22:22:25.554398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:22:25.554411 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:22:25.554420 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:22:25.554429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:22:25.554449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:22:25.554465 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:22:25.554474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:22:25.554493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:22:25.554502 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:22:25.554510 | orchestrator | 2025-09-27 22:22:25.554518 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-27 22:22:25.554526 | orchestrator | Saturday 27 September 2025 22:21:11 +0000 (0:00:00.622) 0:00:49.235 **** 2025-09-27 22:22:25.554534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:22:25.554543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:22:25.554551 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:22:25.554595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:22:25.554606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:22:25.554620 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:22:25.554629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:22:25.554637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:22:25.554646 | orchestrator | skipping: 
[testbed-node-2] 2025-09-27 22:22:25.554653 | orchestrator | 2025-09-27 22:22:25.554661 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-27 22:22:25.554669 | orchestrator | Saturday 27 September 2025 22:21:12 +0000 (0:00:00.873) 0:00:50.109 **** 2025-09-27 22:22:25.554683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554751 | orchestrator | 2025-09-27 22:22:25.554759 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-27 22:22:25.554767 | orchestrator | Saturday 27 September 2025 22:21:14 +0000 (0:00:02.482) 0:00:52.591 **** 2025-09-27 22:22:25.554779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.554809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.554851 | orchestrator | 2025-09-27 22:22:25.554859 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-27 22:22:25.554867 | orchestrator | Saturday 27 September 2025 22:21:19 +0000 (0:00:05.008) 0:00:57.599 **** 2025-09-27 22:22:25.554875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:22:25.554883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:22:25.554891 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:22:25.554900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:22:25.554919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:22:25.554933 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:22:25.554941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 22:22:25.554949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:22:25.554957 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:22:25.554965 | orchestrator | 2025-09-27 22:22:25.554974 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-27 22:22:25.554982 | orchestrator | Saturday 27 September 2025 22:21:20 +0000 (0:00:00.653) 0:00:58.253 **** 2025-09-27 22:22:25.554990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.555004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.555021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 22:22:25.555030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.555038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.555046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:22:25.555054 | orchestrator | 2025-09-27 22:22:25.555062 | orchestrator | TASK 
[magnum : include_tasks] ************************************************** 2025-09-27 22:22:25.555070 | orchestrator | Saturday 27 September 2025 22:21:22 +0000 (0:00:02.313) 0:01:00.566 **** 2025-09-27 22:22:25.555078 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:22:25.555086 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:22:25.555094 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:22:25.555102 | orchestrator | 2025-09-27 22:22:25.555110 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-27 22:22:25.555118 | orchestrator | Saturday 27 September 2025 22:21:23 +0000 (0:00:00.278) 0:01:00.844 **** 2025-09-27 22:22:25.555132 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:22:25.555140 | orchestrator | 2025-09-27 22:22:25.555147 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-27 22:22:25.555155 | orchestrator | Saturday 27 September 2025 22:21:25 +0000 (0:00:02.325) 0:01:03.170 **** 2025-09-27 22:22:25.555163 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:22:25.555171 | orchestrator | 2025-09-27 22:22:25.555178 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-27 22:22:25.555186 | orchestrator | Saturday 27 September 2025 22:21:27 +0000 (0:00:02.388) 0:01:05.558 **** 2025-09-27 22:22:25.555199 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:22:25.555207 | orchestrator | 2025-09-27 22:22:25.555215 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-27 22:22:25.555223 | orchestrator | Saturday 27 September 2025 22:21:48 +0000 (0:00:20.290) 0:01:25.849 **** 2025-09-27 22:22:25.555231 | orchestrator | 2025-09-27 22:22:25.555239 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-27 22:22:25.555247 | orchestrator | Saturday 27 September 2025 
22:21:48 +0000 (0:00:00.064) 0:01:25.914 **** 2025-09-27 22:22:25.555255 | orchestrator | 2025-09-27 22:22:25.555266 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-27 22:22:25.555274 | orchestrator | Saturday 27 September 2025 22:21:48 +0000 (0:00:00.063) 0:01:25.977 **** 2025-09-27 22:22:25.555282 | orchestrator | 2025-09-27 22:22:25.555290 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-27 22:22:25.555297 | orchestrator | Saturday 27 September 2025 22:21:48 +0000 (0:00:00.065) 0:01:26.043 **** 2025-09-27 22:22:25.555305 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:22:25.555313 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:22:25.555320 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:22:25.555328 | orchestrator | 2025-09-27 22:22:25.555336 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-27 22:22:25.555344 | orchestrator | Saturday 27 September 2025 22:22:07 +0000 (0:00:18.828) 0:01:44.871 **** 2025-09-27 22:22:25.555352 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:22:25.555360 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:22:25.555368 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:22:25.555375 | orchestrator | 2025-09-27 22:22:25.555383 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:22:25.555392 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 22:22:25.555400 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 22:22:25.555408 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 22:22:25.555417 | orchestrator | 2025-09-27 22:22:25.555424 | orchestrator | 2025-09-27 
22:22:25.555432 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:22:25.555440 | orchestrator | Saturday 27 September 2025 22:22:23 +0000 (0:00:16.500) 0:02:01.372 **** 2025-09-27 22:22:25.555448 | orchestrator | =============================================================================== 2025-09-27 22:22:25.555455 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 20.29s 2025-09-27 22:22:25.555463 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.83s 2025-09-27 22:22:25.555471 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.50s 2025-09-27 22:22:25.555479 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.61s 2025-09-27 22:22:25.555487 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.01s 2025-09-27 22:22:25.555494 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.29s 2025-09-27 22:22:25.555510 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.02s 2025-09-27 22:22:25.555518 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.88s 2025-09-27 22:22:25.555526 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.81s 2025-09-27 22:22:25.555533 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.69s 2025-09-27 22:22:25.555542 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.58s 2025-09-27 22:22:25.555549 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.58s 2025-09-27 22:22:25.555557 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.52s 2025-09-27 22:22:25.555585 | 
orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.17s 2025-09-27 22:22:25.555593 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.80s 2025-09-27 22:22:25.555601 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.48s 2025-09-27 22:22:25.555609 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.39s 2025-09-27 22:22:25.555617 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.33s 2025-09-27 22:22:25.555624 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.31s 2025-09-27 22:22:25.555632 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.41s 2025-09-27 22:22:25.555641 | orchestrator | 2025-09-27 22:22:25 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:25.555649 | orchestrator | 2025-09-27 22:22:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:28.605343 | orchestrator | 2025-09-27 22:22:28 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:28.606159 | orchestrator | 2025-09-27 22:22:28 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:28.607525 | orchestrator | 2025-09-27 22:22:28 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:28.607636 | orchestrator | 2025-09-27 22:22:28 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:31.656541 | orchestrator | 2025-09-27 22:22:31 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:31.657955 | orchestrator | 2025-09-27 22:22:31 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:31.660528 | orchestrator | 2025-09-27 22:22:31 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is 
in state STARTED 2025-09-27 22:22:31.660661 | orchestrator | 2025-09-27 22:22:31 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:34.704253 | orchestrator | 2025-09-27 22:22:34 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:34.706729 | orchestrator | 2025-09-27 22:22:34 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:34.709228 | orchestrator | 2025-09-27 22:22:34 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:34.709544 | orchestrator | 2025-09-27 22:22:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:37.750142 | orchestrator | 2025-09-27 22:22:37 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:37.752841 | orchestrator | 2025-09-27 22:22:37 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:37.754888 | orchestrator | 2025-09-27 22:22:37 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:37.754948 | orchestrator | 2025-09-27 22:22:37 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:40.785438 | orchestrator | 2025-09-27 22:22:40 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:40.786530 | orchestrator | 2025-09-27 22:22:40 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:40.787211 | orchestrator | 2025-09-27 22:22:40 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:40.787234 | orchestrator | 2025-09-27 22:22:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:43.823339 | orchestrator | 2025-09-27 22:22:43 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:43.826714 | orchestrator | 2025-09-27 22:22:43 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:43.829618 | 
orchestrator | 2025-09-27 22:22:43 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:43.829884 | orchestrator | 2025-09-27 22:22:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:46.878119 | orchestrator | 2025-09-27 22:22:46 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:46.879890 | orchestrator | 2025-09-27 22:22:46 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:46.883434 | orchestrator | 2025-09-27 22:22:46 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:46.883459 | orchestrator | 2025-09-27 22:22:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:49.930904 | orchestrator | 2025-09-27 22:22:49 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:49.932073 | orchestrator | 2025-09-27 22:22:49 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:49.933449 | orchestrator | 2025-09-27 22:22:49 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:49.933604 | orchestrator | 2025-09-27 22:22:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:52.978446 | orchestrator | 2025-09-27 22:22:52 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:52.979571 | orchestrator | 2025-09-27 22:22:52 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:52.981368 | orchestrator | 2025-09-27 22:22:52 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:52.981418 | orchestrator | 2025-09-27 22:22:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:56.030916 | orchestrator | 2025-09-27 22:22:56 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:56.033045 | orchestrator | 2025-09-27 22:22:56 | INFO  | Task 
b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:56.034199 | orchestrator | 2025-09-27 22:22:56 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:56.034251 | orchestrator | 2025-09-27 22:22:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:22:59.079799 | orchestrator | 2025-09-27 22:22:59 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:22:59.079886 | orchestrator | 2025-09-27 22:22:59 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:22:59.080277 | orchestrator | 2025-09-27 22:22:59 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:22:59.080582 | orchestrator | 2025-09-27 22:22:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:02.117257 | orchestrator | 2025-09-27 22:23:02 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:02.117395 | orchestrator | 2025-09-27 22:23:02 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state STARTED 2025-09-27 22:23:02.118204 | orchestrator | 2025-09-27 22:23:02 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:02.118823 | orchestrator | 2025-09-27 22:23:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:05.144216 | orchestrator | 2025-09-27 22:23:05 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:05.145351 | orchestrator | 2025-09-27 22:23:05 | INFO  | Task b5ef0e86-3b33-476c-8899-e22a503d7cb5 is in state SUCCESS 2025-09-27 22:23:05.146736 | orchestrator | 2025-09-27 22:23:05.146796 | orchestrator | 2025-09-27 22:23:05.146806 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:23:05.146813 | orchestrator | 2025-09-27 22:23:05.146820 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-09-27 22:23:05.146828 | orchestrator | Saturday 27 September 2025 22:20:29 +0000 (0:00:00.250) 0:00:00.250 **** 2025-09-27 22:23:05.146834 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:05.146842 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:23:05.146848 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:23:05.146854 | orchestrator | 2025-09-27 22:23:05.146861 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:23:05.146868 | orchestrator | Saturday 27 September 2025 22:20:29 +0000 (0:00:00.315) 0:00:00.566 **** 2025-09-27 22:23:05.146874 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-27 22:23:05.146881 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-27 22:23:05.146888 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-27 22:23:05.146894 | orchestrator | 2025-09-27 22:23:05.146900 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-27 22:23:05.146907 | orchestrator | 2025-09-27 22:23:05.146913 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-27 22:23:05.146921 | orchestrator | Saturday 27 September 2025 22:20:30 +0000 (0:00:00.589) 0:00:01.156 **** 2025-09-27 22:23:05.146925 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:23:05.146930 | orchestrator | 2025-09-27 22:23:05.146935 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-27 22:23:05.146939 | orchestrator | Saturday 27 September 2025 22:20:31 +0000 (0:00:01.190) 0:00:02.346 **** 2025-09-27 22:23:05.146976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.146985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.146990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147025 | orchestrator | 2025-09-27 22:23:05.147086 | orchestrator | TASK [grafana : Check if extra 
configuration file exists] ********************** 2025-09-27 22:23:05.147096 | orchestrator | Saturday 27 September 2025 22:20:32 +0000 (0:00:00.918) 0:00:03.265 **** 2025-09-27 22:23:05.147169 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-27 22:23:05.147177 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-27 22:23:05.147182 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 22:23:05.147186 | orchestrator | 2025-09-27 22:23:05.147190 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-27 22:23:05.147258 | orchestrator | Saturday 27 September 2025 22:20:33 +0000 (0:00:00.726) 0:00:03.991 **** 2025-09-27 22:23:05.147269 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:23:05.147287 | orchestrator | 2025-09-27 22:23:05.147301 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-27 22:23:05.147309 | orchestrator | Saturday 27 September 2025 22:20:33 +0000 (0:00:00.567) 0:00:04.559 **** 2025-09-27 22:23:05.147330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147346 | orchestrator | 2025-09-27 22:23:05.147351 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-27 22:23:05.147363 | orchestrator | Saturday 27 September 2025 22:20:35 +0000 (0:00:01.319) 0:00:05.878 **** 2025-09-27 22:23:05.147368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 22:23:05.147372 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:05.147382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 22:23:05.147386 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:05.147395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 22:23:05.147400 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:05.147405 | orchestrator | 2025-09-27 22:23:05.147409 | orchestrator | 
TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-27 22:23:05.147414 | orchestrator | Saturday 27 September 2025 22:20:35 +0000 (0:00:00.324) 0:00:06.203 **** 2025-09-27 22:23:05.147418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 22:23:05.147423 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:05.147428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 22:23:05.147436 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:05.147441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 22:23:05.147446 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:05.147450 | orchestrator | 2025-09-27 22:23:05.147455 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-27 22:23:05.147459 | orchestrator | Saturday 27 September 2025 22:20:36 +0000 (0:00:00.828) 0:00:07.031 **** 2025-09-27 22:23:05.147464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147486 | orchestrator | 2025-09-27 22:23:05.147491 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-27 22:23:05.147495 | orchestrator | Saturday 27 September 2025 22:20:37 +0000 (0:00:01.223) 0:00:08.255 **** 2025-09-27 22:23:05.147500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147559 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.147570 | orchestrator | 2025-09-27 22:23:05.147575 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-27 22:23:05.147579 | orchestrator | Saturday 27 September 2025 22:20:38 +0000 (0:00:01.302) 0:00:09.558 **** 2025-09-27 22:23:05.147584 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:05.147589 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:05.147593 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:05.147598 | orchestrator | 2025-09-27 22:23:05.147602 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-27 
22:23:05.147607 | orchestrator | Saturday 27 September 2025 22:20:39 +0000 (0:00:00.552) 0:00:10.110 **** 2025-09-27 22:23:05.147611 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-27 22:23:05.147616 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-27 22:23:05.147624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-27 22:23:05.147628 | orchestrator | 2025-09-27 22:23:05.147633 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-27 22:23:05.147637 | orchestrator | Saturday 27 September 2025 22:20:40 +0000 (0:00:01.272) 0:00:11.383 **** 2025-09-27 22:23:05.147642 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-27 22:23:05.147647 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-27 22:23:05.147651 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-27 22:23:05.147656 | orchestrator | 2025-09-27 22:23:05.147660 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-27 22:23:05.147664 | orchestrator | Saturday 27 September 2025 22:20:42 +0000 (0:00:01.343) 0:00:12.726 **** 2025-09-27 22:23:05.147672 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 22:23:05.147676 | orchestrator | 2025-09-27 22:23:05.147680 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-27 22:23:05.147685 | orchestrator | Saturday 27 September 2025 22:20:43 +0000 (0:00:01.493) 0:00:14.219 **** 2025-09-27 22:23:05.147691 | orchestrator | [WARNING]: Skipped 
'/etc/kolla/grafana/dashboards' path due to this access 2025-09-27 22:23:05.147700 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-27 22:23:05.147708 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:05.147753 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:23:05.147761 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:23:05.147767 | orchestrator | 2025-09-27 22:23:05.147774 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-27 22:23:05.147781 | orchestrator | Saturday 27 September 2025 22:20:44 +0000 (0:00:01.148) 0:00:15.368 **** 2025-09-27 22:23:05.147787 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:05.147805 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:05.147811 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:05.147821 | orchestrator | 2025-09-27 22:23:05.147825 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-27 22:23:05.147829 | orchestrator | Saturday 27 September 2025 22:20:45 +0000 (0:00:01.041) 0:00:16.409 **** 2025-09-27 22:23:05.147834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083149, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.536002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083149, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.536002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083149, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.536002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1083284, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5722756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147886 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1083284, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5722756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1083284, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5722756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083177, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.538785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147917 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083177, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.538785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083177, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.538785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1083291, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5734086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147932 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1083291, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5734086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1083291, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5734086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1083187, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5420187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-27 22:23:05.147992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1083187, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5420187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.147998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1083187, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5420187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1083274, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5709035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1083274, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5709035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1083274, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5709035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1083148, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5240264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1083148, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5240264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1083148, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5240264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1083171, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.536785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1083171, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.536785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1083171, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.536785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1083179, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.539785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1083179, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.539785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1083179, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.539785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1083191, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.543972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1083191, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.543972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1083191, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.543972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1083281, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5718195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1083281, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5718195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1083281, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5718195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1072606, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5377848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1072606, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5377848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1072606, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5377848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1083266, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.569421, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1083266, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.569421, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1083266, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.569421, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1083188, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.542785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1083188, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.542785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1083188, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.542785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1083185, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5420187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1083185, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5420187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1083185, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5420187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1083182, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5411847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1083182, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5411847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1083182, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5411847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1083193, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5667853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1083193, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5667853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1083193, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5667853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1083181, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.539785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1083181, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.539785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1083181, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.539785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1083278, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.571361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1083278, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.571361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1083278, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.571361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1083505, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6150026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1083505, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6150026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1083505, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6150026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1083347, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5921996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1083347, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5921996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1083347, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5921996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1083315, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.582274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1083315, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.582274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1083315, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.582274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1083406, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5984879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1083406, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5984879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1083406, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5984879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1083303, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5797856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1083303, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5797856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1083303, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5797856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1083467, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6083739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1083467, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6083739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1083467, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6083739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1083408, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6050642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1083408, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6050642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1083408, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6050642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1083477, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.608932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1083477, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.608932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-27 22:23:05.148867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False,
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1083502, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.612786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1083477, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.608932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1083502, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.612786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1083458, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6072361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1083502, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.612786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1083458, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6072361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148899 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1083392, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5971758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1083458, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6072361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1083392, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5971758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148915 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1083338, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5878391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1083392, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5971758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1083338, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5878391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-09-27 22:23:05.148934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1083375, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.595167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1083375, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.595167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1083338, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5878391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1083320, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5847857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1083320, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5847857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1083375, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.595167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1083401, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5984879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1083401, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5984879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1083320, 'dev': 103, 'nlink': 1, 'atime': 
1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5847857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1083494, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.612786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1083494, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.612786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.148995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 16098, 'inode': 1083401, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5984879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1083484, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6108375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1083484, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6108375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1083494, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.612786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1083306, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5809522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1083306, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5809522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1083484, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6108375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1083311, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.581279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1083311, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.581279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149051 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1083306, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.5809522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1083449, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6066332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1083449, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6066332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1083311, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.581279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1083479, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6094866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1083479, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6094866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1083449, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6066332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1083479, 'dev': 103, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759008808.6094866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 22:23:05.149118 | orchestrator | 2025-09-27 22:23:05.149123 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-27 22:23:05.149128 | orchestrator | Saturday 27 September 2025 22:21:24 +0000 (0:00:38.247) 0:00:54.657 **** 2025-09-27 22:23:05.149133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.149138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.149145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 22:23:05.149150 | orchestrator | 2025-09-27 22:23:05.149155 | 
orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-27 22:23:05.149160 | orchestrator | Saturday 27 September 2025 22:21:25 +0000 (0:00:01.084) 0:00:55.741 **** 2025-09-27 22:23:05.149164 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:05.149168 | orchestrator | 2025-09-27 22:23:05.149172 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-27 22:23:05.149176 | orchestrator | Saturday 27 September 2025 22:21:27 +0000 (0:00:02.485) 0:00:58.227 **** 2025-09-27 22:23:05.149180 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:05.149184 | orchestrator | 2025-09-27 22:23:05.149188 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-27 22:23:05.149192 | orchestrator | Saturday 27 September 2025 22:21:29 +0000 (0:00:02.171) 0:01:00.398 **** 2025-09-27 22:23:05.149196 | orchestrator | 2025-09-27 22:23:05.149200 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-27 22:23:05.149231 | orchestrator | Saturday 27 September 2025 22:21:29 +0000 (0:00:00.068) 0:01:00.466 **** 2025-09-27 22:23:05.149237 | orchestrator | 2025-09-27 22:23:05.149240 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-27 22:23:05.149248 | orchestrator | Saturday 27 September 2025 22:21:29 +0000 (0:00:00.068) 0:01:00.535 **** 2025-09-27 22:23:05.149251 | orchestrator | 2025-09-27 22:23:05.149255 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-27 22:23:05.149259 | orchestrator | Saturday 27 September 2025 22:21:30 +0000 (0:00:00.310) 0:01:00.846 **** 2025-09-27 22:23:05.149263 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:05.149267 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:05.149271 | orchestrator | changed: [testbed-node-0] 
2025-09-27 22:23:05.149275 | orchestrator | 2025-09-27 22:23:05.149279 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-27 22:23:05.149283 | orchestrator | Saturday 27 September 2025 22:21:32 +0000 (0:00:01.930) 0:01:02.776 **** 2025-09-27 22:23:05.149287 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:05.149291 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:05.149295 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-27 22:23:05.149299 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-27 22:23:05.149303 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-27 22:23:05.149307 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2025-09-27 22:23:05.149311 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:05.149315 | orchestrator | 2025-09-27 22:23:05.149320 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-27 22:23:05.149324 | orchestrator | Saturday 27 September 2025 22:22:23 +0000 (0:00:51.032) 0:01:53.809 **** 2025-09-27 22:23:05.149328 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:05.149332 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:23:05.149336 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:23:05.149340 | orchestrator | 2025-09-27 22:23:05.149344 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-27 22:23:05.149348 | orchestrator | Saturday 27 September 2025 22:22:57 +0000 (0:00:34.060) 0:02:27.870 **** 2025-09-27 22:23:05.149352 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:05.149356 | orchestrator | 2025-09-27 22:23:05.149359 | orchestrator | 
TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-27 22:23:05.149363 | orchestrator | Saturday 27 September 2025 22:22:59 +0000 (0:00:02.238) 0:02:30.109 **** 2025-09-27 22:23:05.149367 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:05.149371 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:05.149375 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:05.149379 | orchestrator | 2025-09-27 22:23:05.149383 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-27 22:23:05.149387 | orchestrator | Saturday 27 September 2025 22:23:00 +0000 (0:00:01.302) 0:02:31.411 **** 2025-09-27 22:23:05.149391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-27 22:23:05.149396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-27 22:23:05.149400 | orchestrator | 2025-09-27 22:23:05.149404 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-27 22:23:05.149408 | orchestrator | Saturday 27 September 2025 22:23:03 +0000 (0:00:02.491) 0:02:33.903 **** 2025-09-27 22:23:05.149412 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:05.149419 | orchestrator | 2025-09-27 22:23:05.149423 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:23:05.149429 | orchestrator | 
testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-27 22:23:05.149439 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-27 22:23:05.149446 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-27 22:23:05.149452 | orchestrator |
2025-09-27 22:23:05.149459 | orchestrator |
2025-09-27 22:23:05.149465 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 22:23:05.149471 | orchestrator | Saturday 27 September 2025 22:23:03 +0000 (0:00:00.468) 0:02:34.371 ****
2025-09-27 22:23:05.149477 | orchestrator | ===============================================================================
2025-09-27 22:23:05.149484 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.03s
2025-09-27 22:23:05.149491 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.25s
2025-09-27 22:23:05.149497 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.06s
2025-09-27 22:23:05.149503 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.49s
2025-09-27 22:23:05.149604 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.49s
2025-09-27 22:23:05.149631 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.24s
2025-09-27 22:23:05.149636 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.17s
2025-09-27 22:23:05.149640 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.93s
2025-09-27 22:23:05.149643 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.49s
2025-09-27 22:23:05.149648 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.34s
2025-09-27 22:23:05.149657 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.32s
2025-09-27 22:23:05.149662 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.30s
2025-09-27 22:23:05.149666 | orchestrator | grafana : Remove old grafana docker volume ------------------------------ 1.30s
2025-09-27 22:23:05.149670 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s
2025-09-27 22:23:05.149674 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.22s
2025-09-27 22:23:05.149678 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.19s
2025-09-27 22:23:05.149682 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.15s
2025-09-27 22:23:05.149686 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.08s
2025-09-27 22:23:05.149690 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 1.04s
2025-09-27 22:23:05.149694 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.92s
2025-09-27 22:23:05.149698 | orchestrator | 2025-09-27 22:23:05 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:23:05.149703 | orchestrator | 2025-09-27 22:23:05 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:23:08.172160 | orchestrator | 2025-09-27 22:23:08 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED
2025-09-27 22:23:08.172298 | orchestrator | 2025-09-27 22:23:08 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED
2025-09-27 22:23:08.172413 | orchestrator | 2025-09-27 22:23:08 | INFO  | Wait 1 second(s) until the next check
2025-09-27 22:23:11.207945 | orchestrator | 2025-09-27 22:23:11 |
INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:11.208273 | orchestrator | 2025-09-27 22:23:11 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:11.208301 | orchestrator | 2025-09-27 22:23:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:14.259906 | orchestrator | 2025-09-27 22:23:14 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:14.261684 | orchestrator | 2025-09-27 22:23:14 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:14.261771 | orchestrator | 2025-09-27 22:23:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:17.310286 | orchestrator | 2025-09-27 22:23:17 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:17.312016 | orchestrator | 2025-09-27 22:23:17 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:17.312418 | orchestrator | 2025-09-27 22:23:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:20.356815 | orchestrator | 2025-09-27 22:23:20 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:20.357400 | orchestrator | 2025-09-27 22:23:20 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:20.357428 | orchestrator | 2025-09-27 22:23:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:23.403113 | orchestrator | 2025-09-27 22:23:23 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:23.403576 | orchestrator | 2025-09-27 22:23:23 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:23.403614 | orchestrator | 2025-09-27 22:23:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:26.448952 | orchestrator | 2025-09-27 22:23:26 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 
2025-09-27 22:23:26.450233 | orchestrator | 2025-09-27 22:23:26 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:26.450746 | orchestrator | 2025-09-27 22:23:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:29.499382 | orchestrator | 2025-09-27 22:23:29 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:29.500358 | orchestrator | 2025-09-27 22:23:29 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:29.500448 | orchestrator | 2025-09-27 22:23:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:32.543330 | orchestrator | 2025-09-27 22:23:32 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:32.544861 | orchestrator | 2025-09-27 22:23:32 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:32.545071 | orchestrator | 2025-09-27 22:23:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:35.589111 | orchestrator | 2025-09-27 22:23:35 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:35.590569 | orchestrator | 2025-09-27 22:23:35 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:35.590719 | orchestrator | 2025-09-27 22:23:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:38.628649 | orchestrator | 2025-09-27 22:23:38 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:38.630455 | orchestrator | 2025-09-27 22:23:38 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:38.630603 | orchestrator | 2025-09-27 22:23:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:41.676307 | orchestrator | 2025-09-27 22:23:41 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:41.677577 | orchestrator | 2025-09-27 22:23:41 | INFO  | Task 
5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:41.677630 | orchestrator | 2025-09-27 22:23:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:44.728652 | orchestrator | 2025-09-27 22:23:44 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:44.728943 | orchestrator | 2025-09-27 22:23:44 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state STARTED 2025-09-27 22:23:44.729773 | orchestrator | 2025-09-27 22:23:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:47.772006 | orchestrator | 2025-09-27 22:23:47 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:47.775215 | orchestrator | 2025-09-27 22:23:47 | INFO  | Task 5b8911fb-abba-4722-8b64-9077ae96e56b is in state SUCCESS 2025-09-27 22:23:47.777599 | orchestrator | 2025-09-27 22:23:47.777661 | orchestrator | 2025-09-27 22:23:47.777671 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:23:47.777679 | orchestrator | 2025-09-27 22:23:47.777686 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-27 22:23:47.777693 | orchestrator | Saturday 27 September 2025 22:14:53 +0000 (0:00:00.258) 0:00:00.258 **** 2025-09-27 22:23:47.777700 | orchestrator | changed: [testbed-manager] 2025-09-27 22:23:47.777707 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.777714 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:23:47.777720 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:23:47.777727 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.777733 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.777739 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.777745 | orchestrator | 2025-09-27 22:23:47.777751 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 
22:23:47.777757 | orchestrator | Saturday 27 September 2025 22:14:54 +0000 (0:00:00.705) 0:00:00.964 **** 2025-09-27 22:23:47.777763 | orchestrator | changed: [testbed-manager] 2025-09-27 22:23:47.777770 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.777776 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:23:47.777782 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:23:47.777788 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.777794 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.777800 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.777806 | orchestrator | 2025-09-27 22:23:47.777854 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:23:47.777865 | orchestrator | Saturday 27 September 2025 22:14:54 +0000 (0:00:00.602) 0:00:01.566 **** 2025-09-27 22:23:47.777876 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-27 22:23:47.777890 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-27 22:23:47.777904 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-27 22:23:47.777930 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-27 22:23:47.777940 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-27 22:23:47.777950 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-27 22:23:47.777960 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-27 22:23:47.778006 | orchestrator | 2025-09-27 22:23:47.778070 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-27 22:23:47.778081 | orchestrator | 2025-09-27 22:23:47.778088 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-27 22:23:47.778094 | orchestrator | Saturday 27 September 2025 22:14:55 +0000 
(0:00:00.773) 0:00:02.340 **** 2025-09-27 22:23:47.778101 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:23:47.778184 | orchestrator | 2025-09-27 22:23:47.778192 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-27 22:23:47.778200 | orchestrator | Saturday 27 September 2025 22:14:56 +0000 (0:00:00.779) 0:00:03.119 **** 2025-09-27 22:23:47.778208 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-27 22:23:47.778216 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-27 22:23:47.778223 | orchestrator | 2025-09-27 22:23:47.778230 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-27 22:23:47.778237 | orchestrator | Saturday 27 September 2025 22:15:00 +0000 (0:00:03.893) 0:00:07.013 **** 2025-09-27 22:23:47.778245 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-27 22:23:47.778252 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-27 22:23:47.778259 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.778266 | orchestrator | 2025-09-27 22:23:47.778273 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-27 22:23:47.778280 | orchestrator | Saturday 27 September 2025 22:15:03 +0000 (0:00:03.757) 0:00:10.771 **** 2025-09-27 22:23:47.778286 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.778293 | orchestrator | 2025-09-27 22:23:47.778300 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-27 22:23:47.778307 | orchestrator | Saturday 27 September 2025 22:15:04 +0000 (0:00:00.645) 0:00:11.417 **** 2025-09-27 22:23:47.778314 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.778321 | orchestrator | 2025-09-27 22:23:47.778328 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] 
******************** 2025-09-27 22:23:47.778335 | orchestrator | Saturday 27 September 2025 22:15:06 +0000 (0:00:01.424) 0:00:12.841 **** 2025-09-27 22:23:47.778342 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.778349 | orchestrator | 2025-09-27 22:23:47.778356 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-27 22:23:47.778363 | orchestrator | Saturday 27 September 2025 22:15:09 +0000 (0:00:03.253) 0:00:16.094 **** 2025-09-27 22:23:47.778370 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.778377 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.778384 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.778391 | orchestrator | 2025-09-27 22:23:47.778407 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-27 22:23:47.778414 | orchestrator | Saturday 27 September 2025 22:15:09 +0000 (0:00:00.251) 0:00:16.346 **** 2025-09-27 22:23:47.778422 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:47.778428 | orchestrator | 2025-09-27 22:23:47.778435 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-27 22:23:47.778442 | orchestrator | Saturday 27 September 2025 22:15:40 +0000 (0:00:31.259) 0:00:47.606 **** 2025-09-27 22:23:47.778449 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.778477 | orchestrator | 2025-09-27 22:23:47.778485 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-27 22:23:47.778493 | orchestrator | Saturday 27 September 2025 22:15:55 +0000 (0:00:14.859) 0:01:02.466 **** 2025-09-27 22:23:47.778500 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:47.778507 | orchestrator | 2025-09-27 22:23:47.778514 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-27 22:23:47.778521 | orchestrator | Saturday 27 September 
2025 22:16:09 +0000 (0:00:13.927) 0:01:16.394 **** 2025-09-27 22:23:47.778557 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:47.778565 | orchestrator | 2025-09-27 22:23:47.778571 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-27 22:23:47.778595 | orchestrator | Saturday 27 September 2025 22:16:10 +0000 (0:00:01.294) 0:01:17.689 **** 2025-09-27 22:23:47.778602 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.778608 | orchestrator | 2025-09-27 22:23:47.778615 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-27 22:23:47.778651 | orchestrator | Saturday 27 September 2025 22:16:11 +0000 (0:00:00.479) 0:01:18.169 **** 2025-09-27 22:23:47.778658 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:23:47.778664 | orchestrator | 2025-09-27 22:23:47.778671 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-27 22:23:47.778677 | orchestrator | Saturday 27 September 2025 22:16:11 +0000 (0:00:00.424) 0:01:18.593 **** 2025-09-27 22:23:47.778684 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:47.778690 | orchestrator | 2025-09-27 22:23:47.778697 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-27 22:23:47.778703 | orchestrator | Saturday 27 September 2025 22:16:30 +0000 (0:00:18.774) 0:01:37.368 **** 2025-09-27 22:23:47.778709 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.778715 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.778722 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.778728 | orchestrator | 2025-09-27 22:23:47.778735 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-27 22:23:47.778741 | orchestrator | 2025-09-27 
22:23:47.778747 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-27 22:23:47.778753 | orchestrator | Saturday 27 September 2025 22:16:31 +0000 (0:00:00.475) 0:01:37.844 **** 2025-09-27 22:23:47.778760 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:23:47.778766 | orchestrator | 2025-09-27 22:23:47.778779 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-27 22:23:47.778795 | orchestrator | Saturday 27 September 2025 22:16:31 +0000 (0:00:00.571) 0:01:38.415 **** 2025-09-27 22:23:47.778802 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.778809 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.778815 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.778821 | orchestrator | 2025-09-27 22:23:47.778842 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-27 22:23:47.778849 | orchestrator | Saturday 27 September 2025 22:16:33 +0000 (0:00:02.173) 0:01:40.589 **** 2025-09-27 22:23:47.778856 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.778863 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.778870 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.778876 | orchestrator | 2025-09-27 22:23:47.778883 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-27 22:23:47.778897 | orchestrator | Saturday 27 September 2025 22:16:36 +0000 (0:00:02.346) 0:01:42.936 **** 2025-09-27 22:23:47.778904 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.778911 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.778917 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.778924 | orchestrator | 2025-09-27 22:23:47.778994 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] 
******************* 2025-09-27 22:23:47.779007 | orchestrator | Saturday 27 September 2025 22:16:36 +0000 (0:00:00.412) 0:01:43.348 **** 2025-09-27 22:23:47.779018 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-27 22:23:47.779028 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779039 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-27 22:23:47.779049 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779058 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-27 22:23:47.779069 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-27 22:23:47.779079 | orchestrator | 2025-09-27 22:23:47.779089 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-27 22:23:47.779101 | orchestrator | Saturday 27 September 2025 22:16:45 +0000 (0:00:09.116) 0:01:52.465 **** 2025-09-27 22:23:47.779111 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.779121 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779132 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779140 | orchestrator | 2025-09-27 22:23:47.779170 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-27 22:23:47.779184 | orchestrator | Saturday 27 September 2025 22:16:46 +0000 (0:00:00.405) 0:01:52.871 **** 2025-09-27 22:23:47.779191 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-27 22:23:47.779197 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.779203 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-27 22:23:47.779210 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779216 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-27 22:23:47.779223 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779229 | orchestrator | 2025-09-27 22:23:47.779235 | orchestrator | TASK 
[nova-cell : Ensuring config directories exist] *************************** 2025-09-27 22:23:47.779245 | orchestrator | Saturday 27 September 2025 22:16:47 +0000 (0:00:01.281) 0:01:54.153 **** 2025-09-27 22:23:47.779256 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779266 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.779276 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779291 | orchestrator | 2025-09-27 22:23:47.779303 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-27 22:23:47.779314 | orchestrator | Saturday 27 September 2025 22:16:48 +0000 (0:00:00.748) 0:01:54.901 **** 2025-09-27 22:23:47.779325 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779336 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779348 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.779360 | orchestrator | 2025-09-27 22:23:47.779371 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-27 22:23:47.779379 | orchestrator | Saturday 27 September 2025 22:16:49 +0000 (0:00:01.011) 0:01:55.912 **** 2025-09-27 22:23:47.779385 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779392 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779408 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.779414 | orchestrator | 2025-09-27 22:23:47.779420 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-27 22:23:47.779427 | orchestrator | Saturday 27 September 2025 22:16:51 +0000 (0:00:02.440) 0:01:58.353 **** 2025-09-27 22:23:47.779445 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779451 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779501 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:47.779510 | orchestrator | 2025-09-27 22:23:47.779517 | orchestrator | TASK [nova-cell : 
Get a list of existing cells] ******************************** 2025-09-27 22:23:47.779523 | orchestrator | Saturday 27 September 2025 22:17:13 +0000 (0:00:22.325) 0:02:20.678 **** 2025-09-27 22:23:47.779529 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779535 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779542 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:47.779548 | orchestrator | 2025-09-27 22:23:47.779563 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-27 22:23:47.779570 | orchestrator | Saturday 27 September 2025 22:17:29 +0000 (0:00:15.850) 0:02:36.528 **** 2025-09-27 22:23:47.779576 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779582 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:47.779600 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779606 | orchestrator | 2025-09-27 22:23:47.779613 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-27 22:23:47.779619 | orchestrator | Saturday 27 September 2025 22:17:30 +0000 (0:00:00.988) 0:02:37.517 **** 2025-09-27 22:23:47.779625 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779631 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779637 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.779644 | orchestrator | 2025-09-27 22:23:47.779650 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-27 22:23:47.779657 | orchestrator | Saturday 27 September 2025 22:17:44 +0000 (0:00:13.875) 0:02:51.392 **** 2025-09-27 22:23:47.779677 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.779701 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.779712 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.779722 | orchestrator | 2025-09-27 22:23:47.779732 | orchestrator | TASK [Bootstrap upgrade] 
*******************************************************
2025-09-27 22:23:47.779754 | orchestrator | Saturday 27 September 2025 22:17:45 +0000 (0:00:00.908) 0:02:52.301 ****
2025-09-27 22:23:47.779767 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.779777 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:23:47.779788 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:23:47.779800 | orchestrator |
2025-09-27 22:23:47.779811 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-27 22:23:47.779822 | orchestrator |
2025-09-27 22:23:47.779829 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-27 22:23:47.779836 | orchestrator | Saturday 27 September 2025 22:17:45 +0000 (0:00:00.438) 0:02:52.740 ****
2025-09-27 22:23:47.779842 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:23:47.779850 | orchestrator |
2025-09-27 22:23:47.779856 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-27 22:23:47.779862 | orchestrator | Saturday 27 September 2025 22:17:46 +0000 (0:00:00.453) 0:02:53.194 ****
2025-09-27 22:23:47.779925 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-27 22:23:47.779932 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-27 22:23:47.779939 | orchestrator |
2025-09-27 22:23:47.779945 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-27 22:23:47.779952 | orchestrator | Saturday 27 September 2025 22:17:49 +0000 (0:00:03.499) 0:02:56.693 ****
2025-09-27 22:23:47.779958 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-27 22:23:47.779965 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-27 22:23:47.779971 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-27 22:23:47.779978 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-27 22:23:47.779985 | orchestrator |
2025-09-27 22:23:47.779991 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-27 22:23:47.779997 | orchestrator | Saturday 27 September 2025 22:17:56 +0000 (0:00:07.060) 0:03:03.753 ****
2025-09-27 22:23:47.780004 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-27 22:23:47.780010 | orchestrator |
2025-09-27 22:23:47.780017 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-27 22:23:47.780023 | orchestrator | Saturday 27 September 2025 22:18:00 +0000 (0:00:03.403) 0:03:07.157 ****
2025-09-27 22:23:47.780030 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-27 22:23:47.780036 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-27 22:23:47.780043 | orchestrator |
2025-09-27 22:23:47.780049 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-27 22:23:47.780055 | orchestrator | Saturday 27 September 2025 22:18:04 +0000 (0:00:04.013) 0:03:11.171 ****
2025-09-27 22:23:47.780062 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-27 22:23:47.780068 | orchestrator |
2025-09-27 22:23:47.780074 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-27 22:23:47.780081 | orchestrator | Saturday 27 September 2025 22:18:07 +0000 (0:00:03.608) 0:03:14.779 ****
2025-09-27 22:23:47.780087 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-27 22:23:47.780094 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-27 22:23:47.780100 | orchestrator |
2025-09-27 22:23:47.780106 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-27 22:23:47.780127 | orchestrator | Saturday 27 September 2025 22:18:16 +0000 (0:00:08.269) 0:03:23.049 ****
2025-09-27 22:23:47.780144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780208 | orchestrator |
2025-09-27 22:23:47.780214 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-09-27 22:23:47.780221 | orchestrator | Saturday 27 September 2025 22:18:17 +0000 (0:00:01.395) 0:03:24.445 ****
2025-09-27 22:23:47.780227 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.780233 | orchestrator |
2025-09-27 22:23:47.780240 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-27 22:23:47.780246 | orchestrator | Saturday 27 September 2025 22:18:17 +0000 (0:00:00.129) 0:03:24.575 ****
2025-09-27 22:23:47.780252 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.780258 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:23:47.780264 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:23:47.780270 | orchestrator |
2025-09-27 22:23:47.780276 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-27 22:23:47.780283 | orchestrator | Saturday 27 September 2025 22:18:18 +0000 (0:00:00.281) 0:03:24.856 ****
2025-09-27 22:23:47.780289 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-27 22:23:47.780295 | orchestrator |
2025-09-27 22:23:47.780301 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-27 22:23:47.780307 | orchestrator | Saturday 27 September 2025 22:18:18 +0000 (0:00:00.849) 0:03:25.706 ****
2025-09-27 22:23:47.780313 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.780320 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:23:47.780326 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:23:47.780332 | orchestrator |
2025-09-27 22:23:47.780338 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-27 22:23:47.780345 | orchestrator | Saturday 27 September 2025 22:18:19 +0000 (0:00:00.290) 0:03:25.997 ****
2025-09-27 22:23:47.780356 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 22:23:47.780370 | orchestrator |
2025-09-27 22:23:47.780384 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-27 22:23:47.780394 | orchestrator | Saturday 27 September 2025 22:18:19 +0000 (0:00:00.491) 0:03:26.488 ****
2025-09-27 22:23:47.780406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780569 | orchestrator |
2025-09-27 22:23:47.780580 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-27 22:23:47.780590 | orchestrator | Saturday 27 September 2025 22:18:22 +0000 (0:00:02.435) 0:03:28.925 ****
2025-09-27 22:23:47.780614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes':
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780636 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.780643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780663 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:23:47.780676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780694 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:23:47.780701 | orchestrator |
2025-09-27 22:23:47.780707 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-09-27 22:23:47.780713 | orchestrator | Saturday 27 September 2025 22:18:22 +0000 (0:00:00.812) 0:03:29.737 ****
2025-09-27 22:23:47.780720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780739 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.780752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780769 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:23:47.780783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value':
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780817 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:23:47.780827 | orchestrator |
2025-09-27 22:23:47.780837 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-09-27 22:23:47.780847 | orchestrator | Saturday 27 September 2025 22:18:23 +0000 (0:00:00.769) 0:03:30.507 ****
2025-09-27 22:23:47.780865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.780911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.780952 | orchestrator |
2025-09-27 22:23:47.780963 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-09-27 22:23:47.780974 | orchestrator | Saturday 27 September 2025 22:18:25 +0000 (0:00:02.251) 0:03:32.759 ****
2025-09-27 22:23:47.780990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.781009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-27 22:23:47.781028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'},
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 22:23:47.781040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.781056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.781067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.781086 | orchestrator | 2025-09-27 22:23:47.781097 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-27 22:23:47.781108 | orchestrator | Saturday 27 September 2025 22:18:31 +0000 (0:00:05.633) 0:03:38.392 **** 2025-09-27 22:23:47.781118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-09-27 22:23:47.781137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.781148 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.781193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 
22:23:47.781204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.781217 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.781225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 22:23:47.781232 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.781238 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.781245 | orchestrator | 2025-09-27 22:23:47.781251 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-27 22:23:47.781257 | orchestrator | Saturday 27 September 2025 22:18:32 +0000 (0:00:00.661) 0:03:39.053 **** 2025-09-27 22:23:47.781264 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.781270 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:23:47.781276 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:23:47.781283 | orchestrator | 2025-09-27 22:23:47.781293 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-27 22:23:47.781300 | orchestrator | Saturday 27 September 2025 22:18:33 +0000 (0:00:01.554) 0:03:40.608 **** 2025-09-27 22:23:47.781306 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.781312 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.781318 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.781324 | orchestrator | 2025-09-27 22:23:47.781330 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-27 22:23:47.781337 | orchestrator | Saturday 27 September 2025 22:18:34 +0000 (0:00:00.312) 0:03:40.921 **** 2025-09-27 22:23:47.781347 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 22:23:47.781362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 22:23:47.781375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 22:23:47.781382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.781389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.781404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.781411 | orchestrator | 2025-09-27 22:23:47.781417 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-27 22:23:47.781423 | orchestrator | Saturday 27 September 2025 22:18:36 +0000 (0:00:02.113) 0:03:43.034 **** 2025-09-27 22:23:47.781430 | orchestrator | 2025-09-27 22:23:47.781436 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2025-09-27 22:23:47.781442 | orchestrator | Saturday 27 September 2025 22:18:36 +0000 (0:00:00.136) 0:03:43.170 **** 2025-09-27 22:23:47.781448 | orchestrator | 2025-09-27 22:23:47.781454 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-27 22:23:47.781483 | orchestrator | Saturday 27 September 2025 22:18:36 +0000 (0:00:00.127) 0:03:43.298 **** 2025-09-27 22:23:47.781489 | orchestrator | 2025-09-27 22:23:47.781496 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-27 22:23:47.781502 | orchestrator | Saturday 27 September 2025 22:18:36 +0000 (0:00:00.132) 0:03:43.430 **** 2025-09-27 22:23:47.781508 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.781515 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:23:47.781524 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:23:47.781535 | orchestrator | 2025-09-27 22:23:47.781545 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-27 22:23:47.781556 | orchestrator | Saturday 27 September 2025 22:18:53 +0000 (0:00:16.960) 0:04:00.391 **** 2025-09-27 22:23:47.781566 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.781577 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:23:47.781587 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:23:47.781598 | orchestrator | 2025-09-27 22:23:47.781609 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-27 22:23:47.781619 | orchestrator | 2025-09-27 22:23:47.781629 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-27 22:23:47.781637 | orchestrator | Saturday 27 September 2025 22:19:03 +0000 (0:00:10.364) 0:04:10.756 **** 2025-09-27 22:23:47.781644 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:23:47.781650 | orchestrator | 2025-09-27 22:23:47.781656 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-27 22:23:47.781663 | orchestrator | Saturday 27 September 2025 22:19:04 +0000 (0:00:00.992) 0:04:11.748 **** 2025-09-27 22:23:47.781669 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.781675 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.781681 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.781687 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.781693 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.781699 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.781706 | orchestrator | 2025-09-27 22:23:47.781712 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-27 22:23:47.781719 | orchestrator | Saturday 27 September 2025 22:19:05 +0000 (0:00:00.607) 0:04:12.356 **** 2025-09-27 22:23:47.781731 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.781737 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.781743 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.781749 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 22:23:47.781755 | orchestrator | 2025-09-27 22:23:47.781762 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-27 22:23:47.781773 | orchestrator | Saturday 27 September 2025 22:19:06 +0000 (0:00:01.091) 0:04:13.448 **** 2025-09-27 22:23:47.781780 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-27 22:23:47.781786 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-27 22:23:47.781793 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-27 
22:23:47.781800 | orchestrator | 2025-09-27 22:23:47.781806 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-27 22:23:47.781812 | orchestrator | Saturday 27 September 2025 22:19:07 +0000 (0:00:00.731) 0:04:14.179 **** 2025-09-27 22:23:47.781818 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-27 22:23:47.781825 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-27 22:23:47.781831 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-27 22:23:47.781837 | orchestrator | 2025-09-27 22:23:47.781843 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-27 22:23:47.781849 | orchestrator | Saturday 27 September 2025 22:19:08 +0000 (0:00:01.250) 0:04:15.430 **** 2025-09-27 22:23:47.781856 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-27 22:23:47.781862 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.781868 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-27 22:23:47.781874 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.781880 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-27 22:23:47.781886 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.781892 | orchestrator | 2025-09-27 22:23:47.781898 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-27 22:23:47.781905 | orchestrator | Saturday 27 September 2025 22:19:09 +0000 (0:00:00.663) 0:04:16.093 **** 2025-09-27 22:23:47.781911 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 22:23:47.781921 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 22:23:47.781927 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.781934 | orchestrator | skipping: 
[testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 22:23:47.781940 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 22:23:47.781946 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.781952 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 22:23:47.781958 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 22:23:47.781964 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.781971 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-27 22:23:47.781977 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-27 22:23:47.781983 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-27 22:23:47.781989 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-27 22:23:47.781995 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-27 22:23:47.782001 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-27 22:23:47.782007 | orchestrator | 2025-09-27 22:23:47.782013 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-27 22:23:47.782072 | orchestrator | Saturday 27 September 2025 22:19:11 +0000 (0:00:02.016) 0:04:18.110 **** 2025-09-27 22:23:47.782084 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.782091 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.782097 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.782106 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.782117 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.782128 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.782139 | 
orchestrator | 2025-09-27 22:23:47.782148 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-27 22:23:47.782158 | orchestrator | Saturday 27 September 2025 22:19:12 +0000 (0:00:01.409) 0:04:19.519 **** 2025-09-27 22:23:47.782168 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.782178 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.782187 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.782197 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.782206 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.782216 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.782226 | orchestrator | 2025-09-27 22:23:47.782237 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-27 22:23:47.782247 | orchestrator | Saturday 27 September 2025 22:19:14 +0000 (0:00:02.082) 0:04:21.602 **** 2025-09-27 22:23:47.782259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 22:23:47.782846 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 22:23:47.782917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 22:23:47.782927 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 22:23:47.782952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 22:23:47.782959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 22:23:47.782967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 22:23:47.782984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.782996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783021 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783054 | orchestrator | 2025-09-27 22:23:47.783062 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-27 22:23:47.783073 | orchestrator | 
Saturday 27 September 2025 22:19:17 +0000 (0:00:02.300) 0:04:23.902 **** 2025-09-27 22:23:47.783085 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:23:47.783091 | orchestrator | 2025-09-27 22:23:47.783098 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-27 22:23:47.783104 | orchestrator | Saturday 27 September 2025 22:19:18 +0000 (0:00:01.006) 0:04:24.909 **** 2025-09-27 22:23:47.783111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783146 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.783238 | orchestrator | 2025-09-27 22:23:47.783244 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-27 22:23:47.783251 | orchestrator | Saturday 27 September 2025 22:19:21 +0000 (0:00:03.830) 0:04:28.739 **** 2025-09-27 22:23:47.783261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 22:23:47.783273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 22:23:47.783283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.783290 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.783297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 22:23:47.783304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 22:23:47.783316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.783323 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.783329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 22:23:47.783344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 22:23:47.783351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.783358 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.783365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 22:23:47.783371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.783378 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.783390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 22:23:47.783405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.783413 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.783423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 22:23:47.783431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.783438 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.783445 | orchestrator | 2025-09-27 22:23:47.783452 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-27 22:23:47.783482 | orchestrator | Saturday 27 September 2025 22:19:23 +0000 (0:00:01.682) 0:04:30.422 **** 2025-09-27 22:23:47.783490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-27 22:23:47.783497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-27 22:23:47.783511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.783523 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:23:47.783535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-27 22:23:47.783542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-27 22:23:47.783551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.783562 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:23:47.783573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-27 22:23:47.783590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-27 22:23:47.783608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.783619 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:23:47.783633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-27 22:23:47.783644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.783655 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.783666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-27 22:23:47.783677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.783687 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:23:47.783698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-27 22:23:47.783724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.783735 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:23:47.783745 | orchestrator |
2025-09-27 22:23:47.783757 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-27 22:23:47.783767 | orchestrator | Saturday 27 September 2025 22:19:25 +0000 (0:00:02.367) 0:04:32.790 ****
2025-09-27 22:23:47.783778 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.783788 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:23:47.783799 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:23:47.783809 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 22:23:47.783819 | orchestrator |
2025-09-27 22:23:47.783830 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-27 22:23:47.783840 | orchestrator | Saturday 27 September 2025 22:19:27 +0000 (0:00:01.198) 0:04:33.988 ****
2025-09-27 22:23:47.783849 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-27 22:23:47.783860 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-27 22:23:47.783871 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-27 22:23:47.783881 | orchestrator |
2025-09-27 22:23:47.783896 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-27 22:23:47.783906 | orchestrator | Saturday 27 September 2025 22:19:28 +0000 (0:00:00.995) 0:04:34.984 ****
2025-09-27 22:23:47.783915 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-27 22:23:47.783925 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-27 22:23:47.783934 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-27 22:23:47.783944 | orchestrator |
2025-09-27 22:23:47.783954 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-27 22:23:47.783963 | orchestrator | Saturday 27 September 2025 22:19:29 +0000 (0:00:00.889) 0:04:35.873 ****
2025-09-27 22:23:47.783974 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:23:47.783984 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:23:47.783995 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:23:47.784005 | orchestrator |
2025-09-27 22:23:47.784016 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-27 22:23:47.784023 | orchestrator | Saturday 27 September 2025 22:19:29 +0000 (0:00:00.479) 0:04:36.353 ****
2025-09-27 22:23:47.784029 | orchestrator | ok: [testbed-node-3]
2025-09-27 22:23:47.784035 | orchestrator | ok: [testbed-node-4]
2025-09-27 22:23:47.784042 | orchestrator | ok: [testbed-node-5]
2025-09-27 22:23:47.784048 | orchestrator |
2025-09-27 22:23:47.784054 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-27 22:23:47.784060 | orchestrator | Saturday 27 September 2025 22:19:30 +0000 (0:00:00.966) 0:04:37.319 ****
2025-09-27 22:23:47.784066 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-27 22:23:47.784073 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-27 22:23:47.784079 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-27 22:23:47.784085 | orchestrator |
2025-09-27 22:23:47.784097 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-27 22:23:47.784104 | orchestrator | Saturday 27 September 2025 22:19:31 +0000 (0:00:01.152) 0:04:38.471 ****
2025-09-27 22:23:47.784110 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-27 22:23:47.784116 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-27 22:23:47.784123 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-27 22:23:47.784129 | orchestrator |
2025-09-27 22:23:47.784135 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-27 22:23:47.784141 | orchestrator | Saturday 27 September 2025 22:19:32 +0000 (0:00:01.176) 0:04:39.647 ****
2025-09-27 22:23:47.784148 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-27 22:23:47.784154 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-27 22:23:47.784160 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-27 22:23:47.784166 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-27 22:23:47.784175 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-27 22:23:47.784185 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-27 22:23:47.784195 | orchestrator |
2025-09-27 22:23:47.784206 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-27 22:23:47.784216 | orchestrator | Saturday 27 September 2025 22:19:36 +0000 (0:00:03.885) 0:04:43.533 ****
2025-09-27 22:23:47.784226 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:23:47.784237 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:23:47.784247 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:23:47.784257 | orchestrator |
2025-09-27 22:23:47.784268 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-27 22:23:47.784278 | orchestrator | Saturday 27 September 2025 22:19:37 +0000 (0:00:00.567) 0:04:44.100 ****
2025-09-27 22:23:47.784289 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:23:47.784299 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:23:47.784310 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:23:47.784321 | orchestrator |
2025-09-27 22:23:47.784332 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-27 22:23:47.784342 | orchestrator | Saturday 27 September 2025 22:19:37 +0000 (0:00:00.303) 0:04:44.404 ****
2025-09-27 22:23:47.784352 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:23:47.784363 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:23:47.784373 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:23:47.784383 | orchestrator |
2025-09-27 22:23:47.784400 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-27 22:23:47.784411 | orchestrator | Saturday 27 September 2025 22:19:38 +0000 (0:00:01.314) 0:04:45.718 ****
2025-09-27 22:23:47.784422 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-27 22:23:47.784433 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-27 22:23:47.784444 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-27 22:23:47.784453 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-27 22:23:47.784506 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-27 22:23:47.784513 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-27 22:23:47.784520 | orchestrator |
2025-09-27 22:23:47.784526 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-27 22:23:47.784532 | orchestrator | Saturday 27 September 2025 22:19:42 +0000 (0:00:03.471) 0:04:49.190 ****
2025-09-27 22:23:47.784545 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-27 22:23:47.784551 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-27 22:23:47.784562 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-27 22:23:47.784568 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-27 22:23:47.784575 | orchestrator | changed: [testbed-node-3]
2025-09-27 22:23:47.784581 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-27 22:23:47.784587 | orchestrator | changed: [testbed-node-4]
2025-09-27 22:23:47.784593 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-27 22:23:47.784602 | orchestrator | changed: [testbed-node-5]
2025-09-27 22:23:47.784613 | orchestrator |
2025-09-27 22:23:47.784623 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-27 22:23:47.784633 | orchestrator | Saturday 27 September 2025 22:19:46 +0000 (0:00:03.929) 0:04:53.120 ****
2025-09-27 22:23:47.784643 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:23:47.784653 | orchestrator |
2025-09-27 22:23:47.784664 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-27 22:23:47.784674 | orchestrator | Saturday 27 September 2025 22:19:46 +0000 (0:00:00.152) 0:04:53.273 ****
2025-09-27 22:23:47.784684 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:23:47.784694 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:23:47.784703 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:23:47.784712 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.784723 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:23:47.784733 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:23:47.784744 | orchestrator |
2025-09-27 22:23:47.784754 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-27 22:23:47.784764 | orchestrator | Saturday 27 September 2025 22:19:47 +0000 (0:00:00.590) 0:04:53.863 ****
2025-09-27 22:23:47.784774 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-27 22:23:47.784784 | orchestrator |
2025-09-27 22:23:47.784795 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-09-27 22:23:47.784805 | orchestrator | Saturday 27 September 2025 22:19:47 +0000 (0:00:00.670) 0:04:54.533 ****
2025-09-27 22:23:47.784815 | orchestrator | skipping: [testbed-node-3]
2025-09-27 22:23:47.784826 | orchestrator | skipping: [testbed-node-4]
2025-09-27 22:23:47.784836 | orchestrator | skipping: [testbed-node-5]
2025-09-27 22:23:47.784845 | orchestrator | skipping: [testbed-node-0]
2025-09-27 22:23:47.784852 | orchestrator | skipping: [testbed-node-1]
2025-09-27 22:23:47.784858 | orchestrator | skipping: [testbed-node-2]
2025-09-27 22:23:47.784864 | orchestrator |
2025-09-27 22:23:47.784871 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-09-27 22:23:47.784877 | orchestrator | Saturday 27 September 2025 22:19:48 +0000 (0:00:00.747) 0:04:55.281 ****
2025-09-27 22:23:47.784884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-27 22:23:47.784898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-27 22:23:47.784912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-27 22:23:47.784923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-27 22:23:47.784930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-27 22:23:47.784937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-27 22:23:47.784944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-27 22:23:47.784955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-27 22:23:47.784968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-27 22:23:47.784981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.784992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.785003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.785014 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.785031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.785053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.785064 | orchestrator |
2025-09-27 22:23:47.785075 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-09-27 22:23:47.785083 | orchestrator | Saturday 27 September 2025 22:19:52 +0000 (0:00:04.261) 0:04:59.543 ****
2025-09-27 22:23:47.785098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-27 22:23:47.785110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-27 22:23:47.785121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-27 22:23:47.785133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-27 22:23:47.785156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-27 22:23:47.785172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-27 22:23:47.785184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.785195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-27 22:23:47.785206 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.785301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-27 22:23:47.785316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-27 22:23:47.785333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-27 22:23:47.785344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.785355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.785366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.785384 | orchestrator | 2025-09-27 22:23:47.785394 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-27 22:23:47.785405 | orchestrator | Saturday 27 September 2025 22:20:01 +0000 (0:00:08.480) 0:05:08.023 **** 2025-09-27 22:23:47.785416 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.785426 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.785436 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.785446 | 
orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.785452 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.785498 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.785504 | orchestrator | 2025-09-27 22:23:47.785511 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-27 22:23:47.785517 | orchestrator | Saturday 27 September 2025 22:20:02 +0000 (0:00:01.179) 0:05:09.202 **** 2025-09-27 22:23:47.785523 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-27 22:23:47.785530 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-27 22:23:47.785536 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-27 22:23:47.785542 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-27 22:23:47.785554 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-27 22:23:47.785560 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-27 22:23:47.785567 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-27 22:23:47.785573 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.785579 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-27 22:23:47.785586 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.785592 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-27 22:23:47.785598 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.785604 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-27 
22:23:47.785610 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-27 22:23:47.785617 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-27 22:23:47.785623 | orchestrator | 2025-09-27 22:23:47.785629 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-27 22:23:47.785635 | orchestrator | Saturday 27 September 2025 22:20:05 +0000 (0:00:03.538) 0:05:12.740 **** 2025-09-27 22:23:47.785641 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.785648 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.785654 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.785660 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.785666 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.785672 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.785679 | orchestrator | 2025-09-27 22:23:47.785689 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-27 22:23:47.785696 | orchestrator | Saturday 27 September 2025 22:20:06 +0000 (0:00:00.508) 0:05:13.249 **** 2025-09-27 22:23:47.785702 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-27 22:23:47.785709 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-27 22:23:47.785715 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-27 22:23:47.785721 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-27 22:23:47.785733 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 
'service': 'nova-libvirt'})  2025-09-27 22:23:47.785739 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-27 22:23:47.785745 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-27 22:23:47.785751 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-27 22:23:47.785757 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.785764 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-27 22:23:47.785770 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-27 22:23:47.785776 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-27 22:23:47.785783 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.785789 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-27 22:23:47.785795 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.785801 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-27 22:23:47.785808 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-27 22:23:47.785814 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-27 22:23:47.785820 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-27 22:23:47.785826 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-27 22:23:47.785832 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-27 22:23:47.785839 | orchestrator | 2025-09-27 22:23:47.785845 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-27 22:23:47.785851 | orchestrator | Saturday 27 September 2025 22:20:10 +0000 (0:00:04.439) 0:05:17.689 **** 2025-09-27 22:23:47.785858 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-27 22:23:47.785864 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-27 22:23:47.785873 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 22:23:47.785880 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-27 22:23:47.785886 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-27 22:23:47.785892 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 22:23:47.785898 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 22:23:47.785904 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-27 22:23:47.785910 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-27 22:23:47.785916 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-27 22:23:47.785922 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-27 22:23:47.785929 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 
'dest': 'id_rsa'}) 2025-09-27 22:23:47.785935 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-27 22:23:47.785949 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-27 22:23:47.785955 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.785961 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-27 22:23:47.785968 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.785977 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 22:23:47.785983 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-27 22:23:47.785989 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.785995 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 22:23:47.786001 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 22:23:47.786007 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 22:23:47.786038 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 22:23:47.786045 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 22:23:47.786051 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 22:23:47.786057 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 22:23:47.786066 | orchestrator | 2025-09-27 22:23:47.786072 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-27 22:23:47.786079 | orchestrator | Saturday 27 September 2025 22:20:17 +0000 (0:00:06.785) 
0:05:24.475 **** 2025-09-27 22:23:47.786085 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.786091 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.786098 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.786104 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.786110 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.786116 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.786122 | orchestrator | 2025-09-27 22:23:47.786129 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-27 22:23:47.786135 | orchestrator | Saturday 27 September 2025 22:20:18 +0000 (0:00:00.642) 0:05:25.117 **** 2025-09-27 22:23:47.786141 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.786147 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.786153 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.786159 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.786165 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.786171 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.786177 | orchestrator | 2025-09-27 22:23:47.786183 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-27 22:23:47.786190 | orchestrator | Saturday 27 September 2025 22:20:18 +0000 (0:00:00.516) 0:05:25.633 **** 2025-09-27 22:23:47.786196 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.786202 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.786208 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.786214 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.786220 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.786226 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.786232 | orchestrator | 2025-09-27 22:23:47.786239 | orchestrator | TASK [nova-cell : Copying over existing policy 
file] *************************** 2025-09-27 22:23:47.786245 | orchestrator | Saturday 27 September 2025 22:20:21 +0000 (0:00:02.303) 0:05:27.937 **** 2025-09-27 22:23:47.786257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 22:23:47.786270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 22:23:47.786280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.786287 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.786294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 22:23:47.786300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 22:23:47.786307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.786318 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.786329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 22:23:47.786335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.786345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 22:23:47.786352 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.786359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 22:23:47.786365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.786372 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.786385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 22:23:47.786397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.786404 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.786411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 22:23:47.786421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 22:23:47.786429 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.786435 | orchestrator | 2025-09-27 22:23:47.786442 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-27 22:23:47.786448 | orchestrator | Saturday 27 September 2025 22:20:22 +0000 (0:00:01.691) 0:05:29.628 **** 2025-09-27 22:23:47.786455 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-27 22:23:47.786474 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-27 22:23:47.786481 | orchestrator | skipping: [testbed-node-3] 2025-09-27 
22:23:47.786487 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-27 22:23:47.786493 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-27 22:23:47.786499 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.786505 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-27 22:23:47.786511 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-27 22:23:47.786517 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.786523 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-27 22:23:47.786530 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-27 22:23:47.786536 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.786542 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-27 22:23:47.786548 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-27 22:23:47.786555 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.786566 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-27 22:23:47.786573 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-27 22:23:47.786579 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.786585 | orchestrator | 2025-09-27 22:23:47.786591 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-27 22:23:47.786598 | orchestrator | Saturday 27 September 2025 22:20:23 +0000 (0:00:00.778) 0:05:30.406 **** 2025-09-27 22:23:47.786604 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 22:23:47.786733 | orchestrator | 2025-09-27 22:23:47.786739 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-27 22:23:47.786746 | orchestrator | Saturday 27 September 2025 22:20:26 +0000 (0:00:03.291) 0:05:33.698 **** 2025-09-27 22:23:47.786752 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.786758 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.786764 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.786771 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.786781 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.786787 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.786794 | orchestrator | 2025-09-27 22:23:47.786800 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 22:23:47.786807 | orchestrator | Saturday 27 September 2025 22:20:27 +0000 (0:00:00.782) 0:05:34.480 **** 2025-09-27 22:23:47.786813 | orchestrator | 2025-09-27 22:23:47.786819 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 22:23:47.786830 | orchestrator | Saturday 27 September 2025 22:20:27 +0000 (0:00:00.131) 0:05:34.612 **** 2025-09-27 22:23:47.786837 | orchestrator | 2025-09-27 22:23:47.786843 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 22:23:47.786849 | orchestrator | Saturday 27 September 2025 22:20:27 +0000 (0:00:00.123) 0:05:34.735 **** 2025-09-27 
22:23:47.786855 | orchestrator | 2025-09-27 22:23:47.786862 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 22:23:47.786868 | orchestrator | Saturday 27 September 2025 22:20:28 +0000 (0:00:00.191) 0:05:34.927 **** 2025-09-27 22:23:47.786874 | orchestrator | 2025-09-27 22:23:47.786880 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 22:23:47.786886 | orchestrator | Saturday 27 September 2025 22:20:28 +0000 (0:00:00.122) 0:05:35.049 **** 2025-09-27 22:23:47.786893 | orchestrator | 2025-09-27 22:23:47.786899 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 22:23:47.786906 | orchestrator | Saturday 27 September 2025 22:20:28 +0000 (0:00:00.115) 0:05:35.165 **** 2025-09-27 22:23:47.786912 | orchestrator | 2025-09-27 22:23:47.786918 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-27 22:23:47.786925 | orchestrator | Saturday 27 September 2025 22:20:28 +0000 (0:00:00.212) 0:05:35.377 **** 2025-09-27 22:23:47.786931 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:23:47.786938 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.786944 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:23:47.786950 | orchestrator | 2025-09-27 22:23:47.786956 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-27 22:23:47.786962 | orchestrator | Saturday 27 September 2025 22:20:41 +0000 (0:00:13.376) 0:05:48.754 **** 2025-09-27 22:23:47.786969 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.786975 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:23:47.786981 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:23:47.786988 | orchestrator | 2025-09-27 22:23:47.786994 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] 
*********************** 2025-09-27 22:23:47.787000 | orchestrator | Saturday 27 September 2025 22:21:02 +0000 (0:00:20.982) 0:06:09.736 **** 2025-09-27 22:23:47.787007 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.787013 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.787019 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.787025 | orchestrator | 2025-09-27 22:23:47.787031 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-27 22:23:47.787037 | orchestrator | Saturday 27 September 2025 22:21:29 +0000 (0:00:26.577) 0:06:36.314 **** 2025-09-27 22:23:47.787043 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.787050 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.787056 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.787062 | orchestrator | 2025-09-27 22:23:47.787069 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-27 22:23:47.787075 | orchestrator | Saturday 27 September 2025 22:22:06 +0000 (0:00:36.677) 0:07:12.992 **** 2025-09-27 22:23:47.787081 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.787087 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.787093 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.787099 | orchestrator | 2025-09-27 22:23:47.787106 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-27 22:23:47.787112 | orchestrator | Saturday 27 September 2025 22:22:07 +0000 (0:00:01.138) 0:07:14.130 **** 2025-09-27 22:23:47.787118 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.787125 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.787131 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.787137 | orchestrator | 2025-09-27 22:23:47.787143 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] 
******************* 2025-09-27 22:23:47.787153 | orchestrator | Saturday 27 September 2025 22:22:08 +0000 (0:00:00.843) 0:07:14.974 **** 2025-09-27 22:23:47.787159 | orchestrator | changed: [testbed-node-4] 2025-09-27 22:23:47.787170 | orchestrator | changed: [testbed-node-3] 2025-09-27 22:23:47.787177 | orchestrator | changed: [testbed-node-5] 2025-09-27 22:23:47.787183 | orchestrator | 2025-09-27 22:23:47.787190 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-27 22:23:47.787196 | orchestrator | Saturday 27 September 2025 22:22:38 +0000 (0:00:29.989) 0:07:44.963 **** 2025-09-27 22:23:47.787202 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.787208 | orchestrator | 2025-09-27 22:23:47.787214 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-27 22:23:47.787220 | orchestrator | Saturday 27 September 2025 22:22:38 +0000 (0:00:00.126) 0:07:45.090 **** 2025-09-27 22:23:47.787226 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.787233 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.787239 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.787245 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.787251 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.787257 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-09-27 22:23:47.787264 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-27 22:23:47.787270 | orchestrator | 2025-09-27 22:23:47.787276 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-27 22:23:47.787282 | orchestrator | Saturday 27 September 2025 22:22:59 +0000 (0:00:20.875) 0:08:05.965 **** 2025-09-27 22:23:47.787289 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.787295 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.787301 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.787307 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.787313 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.787323 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.787330 | orchestrator | 2025-09-27 22:23:47.787336 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-27 22:23:47.787342 | orchestrator | Saturday 27 September 2025 22:23:07 +0000 (0:00:08.447) 0:08:14.413 **** 2025-09-27 22:23:47.787349 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.787355 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.787361 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.787367 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.787373 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.787380 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-09-27 22:23:47.787386 | orchestrator | 2025-09-27 22:23:47.787392 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-27 22:23:47.787398 | orchestrator | Saturday 27 September 2025 22:23:11 +0000 (0:00:03.688) 0:08:18.101 **** 2025-09-27 22:23:47.787405 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-27 22:23:47.787411 | 
orchestrator | 2025-09-27 22:23:47.787417 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-27 22:23:47.787423 | orchestrator | Saturday 27 September 2025 22:23:24 +0000 (0:00:13.095) 0:08:31.197 **** 2025-09-27 22:23:47.787429 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-27 22:23:47.787436 | orchestrator | 2025-09-27 22:23:47.787442 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-27 22:23:47.787448 | orchestrator | Saturday 27 September 2025 22:23:25 +0000 (0:00:01.325) 0:08:32.523 **** 2025-09-27 22:23:47.787454 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.787480 | orchestrator | 2025-09-27 22:23:47.787487 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-27 22:23:47.787493 | orchestrator | Saturday 27 September 2025 22:23:27 +0000 (0:00:01.308) 0:08:33.831 **** 2025-09-27 22:23:47.787500 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-27 22:23:47.787506 | orchestrator | 2025-09-27 22:23:47.787512 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-27 22:23:47.787523 | orchestrator | Saturday 27 September 2025 22:23:38 +0000 (0:00:11.514) 0:08:45.345 **** 2025-09-27 22:23:47.787529 | orchestrator | ok: [testbed-node-3] 2025-09-27 22:23:47.787535 | orchestrator | ok: [testbed-node-4] 2025-09-27 22:23:47.787541 | orchestrator | ok: [testbed-node-5] 2025-09-27 22:23:47.787547 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:23:47.787554 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:23:47.787560 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:23:47.787566 | orchestrator | 2025-09-27 22:23:47.787572 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-27 22:23:47.787578 | orchestrator | 2025-09-27 
22:23:47.787584 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-27 22:23:47.787590 | orchestrator | Saturday 27 September 2025 22:23:40 +0000 (0:00:01.779) 0:08:47.124 **** 2025-09-27 22:23:47.787596 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:23:47.787602 | orchestrator | changed: [testbed-node-1] 2025-09-27 22:23:47.787609 | orchestrator | changed: [testbed-node-2] 2025-09-27 22:23:47.787615 | orchestrator | 2025-09-27 22:23:47.787621 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-27 22:23:47.787627 | orchestrator | 2025-09-27 22:23:47.787633 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-27 22:23:47.787639 | orchestrator | Saturday 27 September 2025 22:23:41 +0000 (0:00:01.199) 0:08:48.324 **** 2025-09-27 22:23:47.787646 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.787652 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.787658 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.787664 | orchestrator | 2025-09-27 22:23:47.787670 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-27 22:23:47.787677 | orchestrator | 2025-09-27 22:23:47.787683 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-27 22:23:47.787689 | orchestrator | Saturday 27 September 2025 22:23:42 +0000 (0:00:00.497) 0:08:48.821 **** 2025-09-27 22:23:47.787695 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-27 22:23:47.787706 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-27 22:23:47.787712 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-27 22:23:47.787718 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-27 22:23:47.787724 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-27 22:23:47.787731 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-27 22:23:47.787737 | orchestrator | skipping: [testbed-node-3] 2025-09-27 22:23:47.787743 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-27 22:23:47.787749 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-27 22:23:47.787755 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-27 22:23:47.787761 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-27 22:23:47.787768 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-27 22:23:47.787775 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-27 22:23:47.787781 | orchestrator | skipping: [testbed-node-4] 2025-09-27 22:23:47.787787 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-27 22:23:47.787793 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-27 22:23:47.787799 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-27 22:23:47.787806 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-27 22:23:47.787812 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-27 22:23:47.787818 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-27 22:23:47.787824 | orchestrator | skipping: [testbed-node-5] 2025-09-27 22:23:47.787830 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-27 22:23:47.787844 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-27 22:23:47.787851 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-27 22:23:47.787857 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-27 22:23:47.787864 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-27 22:23:47.787870 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-27 22:23:47.787876 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.787882 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-27 22:23:47.787889 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-27 22:23:47.787895 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-27 22:23:47.787901 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-27 22:23:47.787907 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-27 22:23:47.787914 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-27 22:23:47.787920 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.787926 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-27 22:23:47.787933 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-27 22:23:47.787939 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-27 22:23:47.787945 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-27 22:23:47.787951 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-27 22:23:47.787957 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-27 22:23:47.787963 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.787969 | orchestrator | 2025-09-27 22:23:47.787976 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-27 22:23:47.787982 | orchestrator | 2025-09-27 22:23:47.787988 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-27 22:23:47.787994 | orchestrator | Saturday 27 September 2025 22:23:43 +0000 (0:00:01.259) 
0:08:50.081 **** 2025-09-27 22:23:47.788000 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-27 22:23:47.788006 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-27 22:23:47.788012 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.788019 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-27 22:23:47.788025 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-27 22:23:47.788031 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.788037 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-27 22:23:47.788043 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-27 22:23:47.788049 | orchestrator | skipping: [testbed-node-2] 2025-09-27 22:23:47.788056 | orchestrator | 2025-09-27 22:23:47.788062 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-27 22:23:47.788068 | orchestrator | 2025-09-27 22:23:47.788074 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-27 22:23:47.788080 | orchestrator | Saturday 27 September 2025 22:23:43 +0000 (0:00:00.711) 0:08:50.792 **** 2025-09-27 22:23:47.788086 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.788092 | orchestrator | 2025-09-27 22:23:47.788099 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-27 22:23:47.788105 | orchestrator | 2025-09-27 22:23:47.788111 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-09-27 22:23:47.788117 | orchestrator | Saturday 27 September 2025 22:23:44 +0000 (0:00:00.643) 0:08:51.436 **** 2025-09-27 22:23:47.788123 | orchestrator | skipping: [testbed-node-0] 2025-09-27 22:23:47.788129 | orchestrator | skipping: [testbed-node-1] 2025-09-27 22:23:47.788135 | orchestrator | skipping: [testbed-node-2] 2025-09-27 
22:23:47.788141 | orchestrator | 2025-09-27 22:23:47.788148 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:23:47.788160 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:23:47.788170 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-27 22:23:47.788177 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-27 22:23:47.788184 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-27 22:23:47.788190 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-27 22:23:47.788196 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-09-27 22:23:47.788202 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-27 22:23:47.788208 | orchestrator | 2025-09-27 22:23:47.788214 | orchestrator | 2025-09-27 22:23:47.788221 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:23:47.788227 | orchestrator | Saturday 27 September 2025 22:23:45 +0000 (0:00:00.442) 0:08:51.879 **** 2025-09-27 22:23:47.788233 | orchestrator | =============================================================================== 2025-09-27 22:23:47.788240 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.68s 2025-09-27 22:23:47.788252 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.26s 2025-09-27 22:23:47.788259 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.99s 2025-09-27 22:23:47.788265 | orchestrator | nova-cell : Restart nova-ssh 
container --------------------------------- 26.58s 2025-09-27 22:23:47.788271 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.33s 2025-09-27 22:23:47.788277 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 20.98s 2025-09-27 22:23:47.788283 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.88s 2025-09-27 22:23:47.788290 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.78s 2025-09-27 22:23:47.788296 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.96s 2025-09-27 22:23:47.788302 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.85s 2025-09-27 22:23:47.788308 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.86s 2025-09-27 22:23:47.788314 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.93s 2025-09-27 22:23:47.788320 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.88s 2025-09-27 22:23:47.788327 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.38s 2025-09-27 22:23:47.788333 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.10s 2025-09-27 22:23:47.788339 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.51s 2025-09-27 22:23:47.788345 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.36s 2025-09-27 22:23:47.788351 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.12s 2025-09-27 22:23:47.788357 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.48s 2025-09-27 22:23:47.788363 | orchestrator | nova-cell : Fail if nova-compute service 
failed to register ------------- 8.45s 2025-09-27 22:23:47.788370 | orchestrator | 2025-09-27 22:23:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:50.824590 | orchestrator | 2025-09-27 22:23:50 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:50.824680 | orchestrator | 2025-09-27 22:23:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:53.866323 | orchestrator | 2025-09-27 22:23:53 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:53.866433 | orchestrator | 2025-09-27 22:23:53 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:56.905888 | orchestrator | 2025-09-27 22:23:56 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:56.905993 | orchestrator | 2025-09-27 22:23:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:23:59.953548 | orchestrator | 2025-09-27 22:23:59 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:23:59.953632 | orchestrator | 2025-09-27 22:23:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:24:02.999578 | orchestrator | 2025-09-27 22:24:02 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:24:02.999700 | orchestrator | 2025-09-27 22:24:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:24:06.040963 | orchestrator | 2025-09-27 22:24:06 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:24:06.041081 | orchestrator | 2025-09-27 22:24:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:24:09.091413 | orchestrator | 2025-09-27 22:24:09 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:24:09.091535 | orchestrator | 2025-09-27 22:24:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:24:12.134653 | orchestrator | 2025-09-27 22:24:12 | INFO  | Task 
c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:30.642658 | orchestrator | 2025-09-27 22:31:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:31:33.686692 | orchestrator | 2025-09-27 22:31:33 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:33.686795 | orchestrator | 2025-09-27 22:31:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:31:36.731676 | orchestrator | 2025-09-27 22:31:36 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:36.731792 | orchestrator | 2025-09-27 22:31:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:31:39.782872 | orchestrator | 2025-09-27 22:31:39 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:39.782974 | orchestrator | 2025-09-27 22:31:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:31:42.831794 | orchestrator | 2025-09-27 22:31:42 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:42.831960 | orchestrator | 2025-09-27 22:31:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:31:45.875413 | orchestrator | 2025-09-27 22:31:45 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:45.875521 | orchestrator | 2025-09-27 22:31:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:31:48.922529 | orchestrator | 2025-09-27 22:31:48 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:48.922641 | orchestrator | 2025-09-27 22:31:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:31:51.969401 | orchestrator | 2025-09-27 22:31:51 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:51.969499 | orchestrator | 2025-09-27 22:31:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:31:55.019165 | orchestrator | 2025-09-27 22:31:55 | INFO  | Task 
c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:55.019303 | orchestrator | 2025-09-27 22:31:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:31:58.055820 | orchestrator | 2025-09-27 22:31:58 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:31:58.055913 | orchestrator | 2025-09-27 22:31:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:01.109221 | orchestrator | 2025-09-27 22:32:01 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:32:01.109345 | orchestrator | 2025-09-27 22:32:01 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:04.152039 | orchestrator | 2025-09-27 22:32:04 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:32:04.152146 | orchestrator | 2025-09-27 22:32:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:07.191824 | orchestrator | 2025-09-27 22:32:07 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:32:07.191954 | orchestrator | 2025-09-27 22:32:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:10.235895 | orchestrator | 2025-09-27 22:32:10 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:32:10.236015 | orchestrator | 2025-09-27 22:32:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:13.290292 | orchestrator | 2025-09-27 22:32:13 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:32:13.290416 | orchestrator | 2025-09-27 22:32:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:16.334246 | orchestrator | 2025-09-27 22:32:16 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:32:16.334387 | orchestrator | 2025-09-27 22:32:16 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:19.377835 | orchestrator | 2025-09-27 22:32:19 | INFO  | Task 
c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:32:19.377946 | orchestrator | 2025-09-27 22:32:19 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:22.425993 | orchestrator | 2025-09-27 22:32:22 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:32:22.426146 | orchestrator | 2025-09-27 22:32:22 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:25.463973 | orchestrator | 2025-09-27 22:32:25 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state STARTED 2025-09-27 22:32:25.464066 | orchestrator | 2025-09-27 22:32:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 22:32:28.511688 | orchestrator | 2025-09-27 22:32:28.511799 | orchestrator | 2025-09-27 22:32:28.511815 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 22:32:28.511828 | orchestrator | 2025-09-27 22:32:28.511839 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 22:32:28.511851 | orchestrator | Saturday 27 September 2025 22:21:13 +0000 (0:00:00.260) 0:00:00.260 **** 2025-09-27 22:32:28.511862 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:32:28.511874 | orchestrator | ok: [testbed-node-1] 2025-09-27 22:32:28.511885 | orchestrator | ok: [testbed-node-2] 2025-09-27 22:32:28.511895 | orchestrator | 2025-09-27 22:32:28.511907 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 22:32:28.511918 | orchestrator | Saturday 27 September 2025 22:21:13 +0000 (0:00:00.367) 0:00:00.627 **** 2025-09-27 22:32:28.511929 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-27 22:32:28.511940 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-27 22:32:28.511951 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-27 22:32:28.511962 | orchestrator | 2025-09-27 
22:32:28.511973 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-27 22:32:28.511983 | orchestrator | 2025-09-27 22:32:28.511994 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-27 22:32:28.512005 | orchestrator | Saturday 27 September 2025 22:21:14 +0000 (0:00:00.442) 0:00:01.070 **** 2025-09-27 22:32:28.512017 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:32:28.512028 | orchestrator | 2025-09-27 22:32:28.512039 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-27 22:32:28.512050 | orchestrator | Saturday 27 September 2025 22:21:14 +0000 (0:00:00.565) 0:00:01.636 **** 2025-09-27 22:32:28.512062 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-27 22:32:28.512072 | orchestrator | 2025-09-27 22:32:28.512083 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-27 22:32:28.512094 | orchestrator | Saturday 27 September 2025 22:21:18 +0000 (0:00:03.525) 0:00:05.161 **** 2025-09-27 22:32:28.512105 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-27 22:32:28.512116 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-27 22:32:28.512127 | orchestrator | 2025-09-27 22:32:28.512138 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-27 22:32:28.512181 | orchestrator | Saturday 27 September 2025 22:21:24 +0000 (0:00:06.760) 0:00:11.921 **** 2025-09-27 22:32:28.512199 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 22:32:28.512211 | orchestrator | 2025-09-27 22:32:28.512224 | orchestrator | TASK [service-ks-register : octavia | 
Creating users] ************************** 2025-09-27 22:32:28.512236 | orchestrator | Saturday 27 September 2025 22:21:28 +0000 (0:00:03.399) 0:00:15.321 **** 2025-09-27 22:32:28.512249 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 22:32:28.512261 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-27 22:32:28.512273 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-27 22:32:28.512286 | orchestrator | 2025-09-27 22:32:28.512298 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-27 22:32:28.512311 | orchestrator | Saturday 27 September 2025 22:21:36 +0000 (0:00:08.127) 0:00:23.449 **** 2025-09-27 22:32:28.512323 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 22:32:28.512335 | orchestrator | 2025-09-27 22:32:28.512376 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-27 22:32:28.512389 | orchestrator | Saturday 27 September 2025 22:21:39 +0000 (0:00:03.199) 0:00:26.648 **** 2025-09-27 22:32:28.512417 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-27 22:32:28.512430 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-27 22:32:28.512441 | orchestrator | 2025-09-27 22:32:28.512455 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-27 22:32:28.512467 | orchestrator | Saturday 27 September 2025 22:21:47 +0000 (0:00:07.640) 0:00:34.289 **** 2025-09-27 22:32:28.512479 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-27 22:32:28.512491 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-27 22:32:28.512503 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-27 22:32:28.512515 | orchestrator | changed: [testbed-node-0] => 
(item=load-balancer_admin) 2025-09-27 22:32:28.512527 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-27 22:32:28.512539 | orchestrator | 2025-09-27 22:32:28.512551 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-27 22:32:28.512563 | orchestrator | Saturday 27 September 2025 22:22:04 +0000 (0:00:16.695) 0:00:50.984 **** 2025-09-27 22:32:28.512574 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 22:32:28.512584 | orchestrator | 2025-09-27 22:32:28.512595 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-27 22:32:28.512606 | orchestrator | Saturday 27 September 2025 22:22:04 +0000 (0:00:00.568) 0:00:51.553 **** 2025-09-27 22:32:28.512616 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:32:28.512627 | orchestrator | 2025-09-27 22:32:28.512637 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-27 22:32:28.512648 | orchestrator | Saturday 27 September 2025 22:22:10 +0000 (0:00:05.492) 0:00:57.045 **** 2025-09-27 22:32:28.512659 | orchestrator | changed: [testbed-node-0] 2025-09-27 22:32:28.512669 | orchestrator | 2025-09-27 22:32:28.512680 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-27 22:32:28.512710 | orchestrator | Saturday 27 September 2025 22:22:16 +0000 (0:00:06.160) 0:01:03.206 **** 2025-09-27 22:32:28.512722 | orchestrator | ok: [testbed-node-0] 2025-09-27 22:32:28.512732 | orchestrator | 2025-09-27 22:32:28.512743 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-27 22:32:28.512754 | orchestrator | Saturday 27 September 2025 22:22:19 +0000 (0:00:03.584) 0:01:06.790 **** 2025-09-27 22:32:28.512764 | orchestrator | 2025-09-27 22:32:28.512775 | orchestrator 
| STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.512786 | orchestrator | 2025-09-27 22:32:28.512805 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.512816 | orchestrator | 2025-09-27 22:32:28.512827 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.512837 | orchestrator | 2025-09-27 22:32:28.512848 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.512859 | orchestrator | 2025-09-27 22:32:28.512869 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.512880 | orchestrator | 2025-09-27 22:32:28.512890 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.512905 | orchestrator | failed: [testbed-node-0] (item=lb-mgmt-sec-grp) => {"action": "os_security_group", "ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": "504 Gateway Time-out: The server didn't respond in time.", "response": "
504 Gateway Time-out
\nThe server didn't respond in time.\n\n"}, "item": {"enabled": true, "name": "lb-mgmt-sec-grp", "rules": [{"protocol": "icmp"}, {"dst_port": 22, "protocol": "tcp", "src_port": 22}, {"dst_port": "9443", "protocol": "tcp", "src_port": "9443"}]}, "msg": "HttpException: 504: Server Error for url: https://api-int.testbed.osism.xyz:9696/v2.0/security-groups/lb-mgmt-sec-grp, 504 Gateway Time-out: The server didn't respond in time."} 2025-09-27 22:32:28.512919 | orchestrator | 2025-09-27 22:32:28.512930 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.512941 | orchestrator | 2025-09-27 22:32:28.512952 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.512962 | orchestrator | 2025-09-27 22:32:28.512973 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.512984 | orchestrator | 2025-09-27 22:32:28.512994 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.513005 | orchestrator | 2025-09-27 22:32:28.513031 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.513041 | orchestrator | 2025-09-27 22:32:28.513052 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.513063 | orchestrator | 2025-09-27 22:32:28.513074 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.513085 | orchestrator | 2025-09-27 22:32:28.513096 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.513106 | orchestrator | 2025-09-27 22:32:28.513117 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.513128 | orchestrator | 
2025-09-27 22:32:28.513139 | orchestrator | STILL ALIVE [task 'octavia : Create security groups for octavia' is running] *** 2025-09-27 22:32:28.513156 | orchestrator | failed: [testbed-node-0] (item=lb-health-mgr-sec-grp) => {"action": "os_security_group", "ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": "504 Gateway Time-out: The server didn't respond in time.", "response": "
504 Gateway Time-out
\nThe server didn't respond in time.\n\n"}, "item": {"enabled": true, "name": "lb-health-mgr-sec-grp", "rules": [{"dst_port": "5555", "protocol": "udp", "src_port": "5555"}]}, "msg": "HttpException: 504: Server Error for url: https://api-int.testbed.osism.xyz:9696/v2.0/security-groups/lb-health-mgr-sec-grp, 504 Gateway Time-out: The server didn't respond in time."} 2025-09-27 22:32:28.513168 | orchestrator | 2025-09-27 22:32:28.513179 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:32:28.513213 | orchestrator | testbed-node-0 : ok=14  changed=7  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-27 22:32:28.513226 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:32:28.513237 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 22:32:28.513255 | orchestrator | 2025-09-27 22:32:28.513266 | orchestrator | 2025-09-27 22:32:28.513277 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:32:28.513288 | orchestrator | Saturday 27 September 2025 22:32:26 +0000 (0:10:06.468) 0:11:13.259 **** 2025-09-27 22:32:28.513298 | orchestrator | =============================================================================== 2025-09-27 22:32:28.513309 | orchestrator | octavia : Create security groups for octavia -------------------------- 606.47s 2025-09-27 22:32:28.513320 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.70s 2025-09-27 22:32:28.513330 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.13s 2025-09-27 22:32:28.513356 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.64s 2025-09-27 22:32:28.513374 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 
6.76s 2025-09-27 22:32:28.513385 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 6.16s 2025-09-27 22:32:28.513396 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.49s 2025-09-27 22:32:28.513411 | orchestrator | octavia : Get service project id ---------------------------------------- 3.58s 2025-09-27 22:32:28.513422 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.53s 2025-09-27 22:32:28.513433 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.40s 2025-09-27 22:32:28.513443 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.20s 2025-09-27 22:32:28.513454 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.57s 2025-09-27 22:32:28.513465 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.57s 2025-09-27 22:32:28.513475 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-09-27 22:32:28.513486 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-09-27 22:32:28.513496 | orchestrator | 2025-09-27 22:32:28 | INFO  | Task c0ec4e28-e51e-4233-a73b-fb984cb331bc is in state SUCCESS 2025-09-27 22:32:28.513508 | orchestrator | 2025-09-27 22:32:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:32:31.546478 | orchestrator | 2025-09-27 22:32:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:32:34.580926 | orchestrator | 2025-09-27 22:32:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:32:37.631522 | orchestrator | 2025-09-27 22:32:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:32:40.671485 | orchestrator | 2025-09-27 22:32:40 | INFO  | Wait 1 second(s) until refresh of running tasks 
2025-09-27 22:32:43.713217 | orchestrator | 2025-09-27 22:32:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:32:46.757357 | orchestrator | 2025-09-27 22:32:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:32:49.798973 | orchestrator | 2025-09-27 22:32:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:32:52.843270 | orchestrator | 2025-09-27 22:32:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:32:55.885789 | orchestrator | 2025-09-27 22:32:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:32:58.924333 | orchestrator | 2025-09-27 22:32:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:01.962879 | orchestrator | 2025-09-27 22:33:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:05.009109 | orchestrator | 2025-09-27 22:33:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:08.049724 | orchestrator | 2025-09-27 22:33:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:11.088807 | orchestrator | 2025-09-27 22:33:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:14.128995 | orchestrator | 2025-09-27 22:33:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:17.171292 | orchestrator | 2025-09-27 22:33:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:20.209863 | orchestrator | 2025-09-27 22:33:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:23.253265 | orchestrator | 2025-09-27 22:33:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:26.292483 | orchestrator | 2025-09-27 22:33:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:33:29.332625 | orchestrator | 2025-09-27 22:33:29.638871 | orchestrator | 2025-09-27 22:33:29.646830 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- 
Sat Sep 27 22:33:29 UTC 2025 2025-09-27 22:33:29.646928 | orchestrator | 2025-09-27 22:33:30.082178 | orchestrator | ok: Runtime: 0:38:35.655414 2025-09-27 22:33:30.345762 | 2025-09-27 22:33:30.345983 | TASK [Bootstrap services] 2025-09-27 22:33:31.175782 | orchestrator | 2025-09-27 22:33:31.175949 | orchestrator | # BOOTSTRAP 2025-09-27 22:33:31.175965 | orchestrator | 2025-09-27 22:33:31.175972 | orchestrator | + set -e 2025-09-27 22:33:31.175979 | orchestrator | + echo 2025-09-27 22:33:31.175987 | orchestrator | + echo '# BOOTSTRAP' 2025-09-27 22:33:31.175998 | orchestrator | + echo 2025-09-27 22:33:31.176031 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-27 22:33:31.184716 | orchestrator | + set -e 2025-09-27 22:33:31.184805 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-27 22:33:35.402845 | orchestrator | 2025-09-27 22:33:35 | INFO  | It takes a moment until task a60bceee-90de-44e2-ad08-e793ab0c6e1b (flavor-manager) has been started and output is visible here. 
2025-09-27 22:33:43.042585 | orchestrator | 2025-09-27 22:33:38 | INFO  | Flavor SCS-1L-1 created 2025-09-27 22:33:43.042726 | orchestrator | 2025-09-27 22:33:38 | INFO  | Flavor SCS-1L-1-5 created 2025-09-27 22:33:43.042744 | orchestrator | 2025-09-27 22:33:38 | INFO  | Flavor SCS-1V-2 created 2025-09-27 22:33:43.042756 | orchestrator | 2025-09-27 22:33:38 | INFO  | Flavor SCS-1V-2-5 created 2025-09-27 22:33:43.042768 | orchestrator | 2025-09-27 22:33:39 | INFO  | Flavor SCS-1V-4 created 2025-09-27 22:33:43.042779 | orchestrator | 2025-09-27 22:33:39 | INFO  | Flavor SCS-1V-4-10 created 2025-09-27 22:33:43.042790 | orchestrator | 2025-09-27 22:33:39 | INFO  | Flavor SCS-1V-8 created 2025-09-27 22:33:43.042803 | orchestrator | 2025-09-27 22:33:39 | INFO  | Flavor SCS-1V-8-20 created 2025-09-27 22:33:43.042827 | orchestrator | 2025-09-27 22:33:39 | INFO  | Flavor SCS-2V-4 created 2025-09-27 22:33:43.042839 | orchestrator | 2025-09-27 22:33:39 | INFO  | Flavor SCS-2V-4-10 created 2025-09-27 22:33:43.042850 | orchestrator | 2025-09-27 22:33:39 | INFO  | Flavor SCS-2V-8 created 2025-09-27 22:33:43.042861 | orchestrator | 2025-09-27 22:33:40 | INFO  | Flavor SCS-2V-8-20 created 2025-09-27 22:33:43.042872 | orchestrator | 2025-09-27 22:33:40 | INFO  | Flavor SCS-2V-16 created 2025-09-27 22:33:43.042883 | orchestrator | 2025-09-27 22:33:40 | INFO  | Flavor SCS-2V-16-50 created 2025-09-27 22:33:43.042894 | orchestrator | 2025-09-27 22:33:40 | INFO  | Flavor SCS-4V-8 created 2025-09-27 22:33:43.042905 | orchestrator | 2025-09-27 22:33:40 | INFO  | Flavor SCS-4V-8-20 created 2025-09-27 22:33:43.042916 | orchestrator | 2025-09-27 22:33:41 | INFO  | Flavor SCS-4V-16 created 2025-09-27 22:33:43.042927 | orchestrator | 2025-09-27 22:33:41 | INFO  | Flavor SCS-4V-16-50 created 2025-09-27 22:33:43.042938 | orchestrator | 2025-09-27 22:33:41 | INFO  | Flavor SCS-4V-32 created 2025-09-27 22:33:43.042949 | orchestrator | 2025-09-27 22:33:41 | INFO  | Flavor SCS-4V-32-100 created 
2025-09-27 22:33:43.042960 | orchestrator | 2025-09-27 22:33:41 | INFO  | Flavor SCS-8V-16 created 2025-09-27 22:33:43.042971 | orchestrator | 2025-09-27 22:33:41 | INFO  | Flavor SCS-8V-16-50 created 2025-09-27 22:33:43.042983 | orchestrator | 2025-09-27 22:33:41 | INFO  | Flavor SCS-8V-32 created 2025-09-27 22:33:43.042994 | orchestrator | 2025-09-27 22:33:42 | INFO  | Flavor SCS-8V-32-100 created 2025-09-27 22:33:43.043024 | orchestrator | 2025-09-27 22:33:42 | INFO  | Flavor SCS-16V-32 created 2025-09-27 22:33:43.043047 | orchestrator | 2025-09-27 22:33:42 | INFO  | Flavor SCS-16V-32-100 created 2025-09-27 22:33:43.043058 | orchestrator | 2025-09-27 22:33:42 | INFO  | Flavor SCS-2V-4-20s created 2025-09-27 22:33:43.043069 | orchestrator | 2025-09-27 22:33:42 | INFO  | Flavor SCS-4V-8-50s created 2025-09-27 22:33:43.043080 | orchestrator | 2025-09-27 22:33:42 | INFO  | Flavor SCS-8V-32-100s created 2025-09-27 22:33:45.334671 | orchestrator | 2025-09-27 22:33:45 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-27 22:33:55.412465 | orchestrator | 2025-09-27 22:33:55 | INFO  | Task 36bcb26c-b68b-48c2-a6aa-ecd777ae7b31 (bootstrap-basic) was prepared for execution. 2025-09-27 22:33:55.412547 | orchestrator | 2025-09-27 22:33:55 | INFO  | It takes a moment until task 36bcb26c-b68b-48c2-a6aa-ecd777ae7b31 (bootstrap-basic) has been started and output is visible here. 2025-09-27 22:39:33.917883 | orchestrator | 2025-09-27 22:39:33 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-27 22:39:33.917979 | orchestrator | 2025-09-27 22:39:33 | INFO  | Task a644d842-0d23-4fa5-a9b9-a6305619368e (bootstrap-basic) was prepared for execution. 2025-09-27 22:39:33.918286 | orchestrator | 2025-09-27 22:39:33 | INFO  | It takes a moment until task a644d842-0d23-4fa5-a9b9-a6305619368e (bootstrap-basic) has been started and output is visible here. 
2025-09-27 22:44:54.146077 | orchestrator | 2025-09-27 22:44:54.146205 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-27 22:44:54.146224 | orchestrator | 2025-09-27 22:44:54.146238 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 22:44:54.146250 | orchestrator | Saturday 27 September 2025 22:33:59 +0000 (0:00:00.055) 0:00:00.055 **** 2025-09-27 22:44:54.146261 | orchestrator | ok: [localhost] 2025-09-27 22:44:54.146273 | orchestrator | 2025-09-27 22:44:54.146284 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-27 22:44:54.146296 | orchestrator | Saturday 27 September 2025 22:34:00 +0000 (0:00:01.611) 0:00:01.667 **** 2025-09-27 22:44:54.146307 | orchestrator | ok: [localhost] 2025-09-27 22:44:54.146319 | orchestrator | 2025-09-27 22:44:54.146330 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-27 22:44:54.146342 | orchestrator | Saturday 27 September 2025 22:34:08 +0000 (0:00:07.755) 0:00:09.422 **** 2025-09-27 22:44:54.146353 | orchestrator | changed: [localhost] 2025-09-27 22:44:54.146365 | orchestrator | 2025-09-27 22:44:54.146376 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-27 22:44:54.146387 | orchestrator | Saturday 27 September 2025 22:34:16 +0000 (0:00:07.934) 0:00:17.357 **** 2025-09-27 22:44:54.146398 | orchestrator | ok: [localhost] 2025-09-27 22:44:54.146412 | orchestrator | 2025-09-27 22:44:54.146424 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-27 22:44:54.146436 | orchestrator | Saturday 27 September 2025 22:34:24 +0000 (0:00:07.459) 0:00:24.817 **** 2025-09-27 22:44:54.146489 | orchestrator | changed: [localhost] 2025-09-27 22:44:54.146499 | orchestrator | 2025-09-27 22:44:54.146509 | orchestrator | 
TASK [Create public network] *************************************************** 2025-09-27 22:44:54.146519 | orchestrator | Saturday 27 September 2025 22:34:30 +0000 (0:00:06.348) 0:00:31.166 **** 2025-09-27 22:44:54.146529 | orchestrator | 2025-09-27 22:44:54.146549 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.146560 | orchestrator | 2025-09-27 22:44:54.146570 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.146580 | orchestrator | 2025-09-27 22:44:54.146591 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.146600 | orchestrator | 2025-09-27 22:44:54.146611 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.146622 | orchestrator | 2025-09-27 22:44:54.146631 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.146642 | orchestrator | 2025-09-27 22:44:54.146653 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.146673 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "extra_data": {"data": null, "details": "504 Gateway Time-out: The server didn't respond in time.", "response": "
504 Gateway Time-out
\nThe server didn't respond in time.\n\n"}, "msg": "HttpException: 504: Server Error for url: https://api.testbed.osism.xyz:9696/v2.0/networks/public, 504 Gateway Time-out: The server didn't respond in time."} 2025-09-27 22:44:54.146718 | orchestrator | 2025-09-27 22:44:54.146730 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:44:54.146743 | orchestrator | localhost : ok=5  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-27 22:44:54.146755 | orchestrator | 2025-09-27 22:44:54.146767 | orchestrator | 2025-09-27 22:44:54.146778 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:44:54.146789 | orchestrator | Saturday 27 September 2025 22:39:33 +0000 (0:05:03.302) 0:05:34.469 **** 2025-09-27 22:44:54.146797 | orchestrator | =============================================================================== 2025-09-27 22:44:54.146805 | orchestrator | Create public network ------------------------------------------------- 303.30s 2025-09-27 22:44:54.146812 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.93s 2025-09-27 22:44:54.146820 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.76s 2025-09-27 22:44:54.146830 | orchestrator | Get volume type local --------------------------------------------------- 7.46s 2025-09-27 22:44:54.146841 | orchestrator | Create volume type local ------------------------------------------------ 6.35s 2025-09-27 22:44:54.146852 | orchestrator | Gathering Facts --------------------------------------------------------- 1.61s 2025-09-27 22:44:54.146862 | orchestrator | 2025-09-27 22:44:54.146873 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-27 22:44:54.146884 | orchestrator | 2025-09-27 22:44:54.146894 | orchestrator | TASK [Get volume type LUKS] 
**************************************************** 2025-09-27 22:44:54.146906 | orchestrator | Saturday 27 September 2025 22:39:37 +0000 (0:00:00.057) 0:00:00.057 **** 2025-09-27 22:44:54.146917 | orchestrator | ok: [localhost] 2025-09-27 22:44:54.146927 | orchestrator | 2025-09-27 22:44:54.146937 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-27 22:44:54.146944 | orchestrator | Saturday 27 September 2025 22:39:43 +0000 (0:00:06.283) 0:00:06.341 **** 2025-09-27 22:44:54.146950 | orchestrator | skipping: [localhost] 2025-09-27 22:44:54.146957 | orchestrator | 2025-09-27 22:44:54.146964 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-27 22:44:54.146970 | orchestrator | Saturday 27 September 2025 22:39:44 +0000 (0:00:00.052) 0:00:06.393 **** 2025-09-27 22:44:54.146977 | orchestrator | ok: [localhost] 2025-09-27 22:44:54.146983 | orchestrator | 2025-09-27 22:44:54.146990 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-27 22:44:54.146996 | orchestrator | Saturday 27 September 2025 22:39:50 +0000 (0:00:06.354) 0:00:12.747 **** 2025-09-27 22:44:54.147003 | orchestrator | skipping: [localhost] 2025-09-27 22:44:54.147010 | orchestrator | 2025-09-27 22:44:54.147016 | orchestrator | TASK [Create public network] *************************************************** 2025-09-27 22:44:54.147042 | orchestrator | Saturday 27 September 2025 22:39:50 +0000 (0:00:00.057) 0:00:12.805 **** 2025-09-27 22:44:54.147049 | orchestrator | 2025-09-27 22:44:54.147055 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.147062 | orchestrator | 2025-09-27 22:44:54.147069 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.147075 | orchestrator | 2025-09-27 22:44:54.147083 | orchestrator | STILL 
ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.147094 | orchestrator | 2025-09-27 22:44:54.147105 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.147117 | orchestrator | 2025-09-27 22:44:54.147128 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.147139 | orchestrator | 2025-09-27 22:44:54.147149 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:44:54.147167 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "extra_data": {"data": null, "details": "504 Gateway Time-out: The server didn't respond in time.", "response": "
504 Gateway Time-out
\nThe server didn't respond in time.\n\n"}, "msg": "HttpException: 504: Server Error for url: https://api.testbed.osism.xyz:9696/v2.0/networks/public, 504 Gateway Time-out: The server didn't respond in time."} 2025-09-27 22:44:54.147191 | orchestrator | 2025-09-27 22:44:54.147201 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:44:54.147212 | orchestrator | localhost : ok=2  changed=0 unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025-09-27 22:44:54.147222 | orchestrator | 2025-09-27 22:44:54.147231 | orchestrator | 2025-09-27 22:44:54.147242 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:44:54.147252 | orchestrator | Saturday 27 September 2025 22:44:53 +0000 (0:05:03.486) 0:05:16.291 **** 2025-09-27 22:44:54.147263 | orchestrator | =============================================================================== 2025-09-27 22:44:54.147275 | orchestrator | Create public network ------------------------------------------------- 303.49s 2025-09-27 22:44:54.147286 | orchestrator | Get volume type local --------------------------------------------------- 6.35s 2025-09-27 22:44:54.147297 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.28s 2025-09-27 22:44:54.147307 | orchestrator | Create volume type local ------------------------------------------------ 0.06s 2025-09-27 22:44:54.147317 | orchestrator | Create volume type LUKS ------------------------------------------------- 0.05s 2025-09-27 22:44:54.924034 | orchestrator | ERROR 2025-09-27 22:44:54.924593 | orchestrator | { 2025-09-27 22:44:54.924716 | orchestrator | "delta": "0:11:23.683883", 2025-09-27 22:44:54.924788 | orchestrator | "end": "2025-09-27 22:44:54.421679", 2025-09-27 22:44:54.924848 | orchestrator | "msg": "non-zero return code", 2025-09-27 22:44:54.924905 | orchestrator | "rc": 2, 2025-09-27 22:44:54.924961 | orchestrator | 
"start": "2025-09-27 22:33:30.737796" 2025-09-27 22:44:54.925016 | orchestrator | } failure 2025-09-27 22:44:54.943640 | 2025-09-27 22:44:54.943980 | PLAY RECAP 2025-09-27 22:44:54.944142 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-09-27 22:44:54.944210 | 2025-09-27 22:44:55.222637 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-09-27 22:44:55.224078 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-27 22:44:55.975624 | 2025-09-27 22:44:55.975809 | PLAY [Post output play] 2025-09-27 22:44:55.992304 | 2025-09-27 22:44:55.992438 | LOOP [stage-output : Register sources] 2025-09-27 22:44:56.061554 | 2025-09-27 22:44:56.061904 | TASK [stage-output : Check sudo] 2025-09-27 22:44:56.921290 | orchestrator | sudo: a password is required 2025-09-27 22:44:57.109758 | orchestrator | ok: Runtime: 0:00:00.015386 2025-09-27 22:44:57.123381 | 2025-09-27 22:44:57.123540 | LOOP [stage-output : Set source and destination for files and folders] 2025-09-27 22:44:57.159229 | 2025-09-27 22:44:57.159510 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-09-27 22:44:57.227917 | orchestrator | ok 2025-09-27 22:44:57.236733 | 2025-09-27 22:44:57.236860 | LOOP [stage-output : Ensure target folders exist] 2025-09-27 22:44:57.674176 | orchestrator | ok: "docs" 2025-09-27 22:44:57.674432 | 2025-09-27 22:44:57.950294 | orchestrator | ok: "artifacts" 2025-09-27 22:44:58.194048 | orchestrator | ok: "logs" 2025-09-27 22:44:58.216687 | 2025-09-27 22:44:58.216908 | LOOP [stage-output : Copy files and folders to staging folder] 2025-09-27 22:44:58.255999 | 2025-09-27 22:44:58.256323 | TASK [stage-output : Make all log files readable] 2025-09-27 22:44:58.542875 | orchestrator | ok 2025-09-27 22:44:58.551324 | 2025-09-27 22:44:58.551441 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-09-27 22:44:58.585912 | orchestrator 
| skipping: Conditional result was False 2025-09-27 22:44:58.601847 | 2025-09-27 22:44:58.601991 | TASK [stage-output : Discover log files for compression] 2025-09-27 22:44:58.626089 | orchestrator | skipping: Conditional result was False 2025-09-27 22:44:58.641381 | 2025-09-27 22:44:58.641527 | LOOP [stage-output : Archive everything from logs] 2025-09-27 22:44:58.686492 | 2025-09-27 22:44:58.686729 | PLAY [Post cleanup play] 2025-09-27 22:44:58.697957 | 2025-09-27 22:44:58.698059 | TASK [Set cloud fact (Zuul deployment)] 2025-09-27 22:44:58.746660 | orchestrator | ok 2025-09-27 22:44:58.754718 | 2025-09-27 22:44:58.754816 | TASK [Set cloud fact (local deployment)] 2025-09-27 22:44:58.787887 | orchestrator | skipping: Conditional result was False 2025-09-27 22:44:58.798388 | 2025-09-27 22:44:58.798503 | TASK [Clean the cloud environment] 2025-09-27 22:44:59.454315 | orchestrator | 2025-09-27 22:44:59 - clean up servers 2025-09-27 22:45:00.212347 | orchestrator | 2025-09-27 22:45:00 - testbed-manager 2025-09-27 22:45:00.303512 | orchestrator | 2025-09-27 22:45:00 - testbed-node-1 2025-09-27 22:45:00.400141 | orchestrator | 2025-09-27 22:45:00 - testbed-node-0 2025-09-27 22:45:00.492066 | orchestrator | 2025-09-27 22:45:00 - testbed-node-4 2025-09-27 22:45:00.590856 | orchestrator | 2025-09-27 22:45:00 - testbed-node-2 2025-09-27 22:45:00.682059 | orchestrator | 2025-09-27 22:45:00 - testbed-node-5 2025-09-27 22:45:00.774375 | orchestrator | 2025-09-27 22:45:00 - testbed-node-3 2025-09-27 22:45:00.861380 | orchestrator | 2025-09-27 22:45:00 - clean up keypairs 2025-09-27 22:45:00.882791 | orchestrator | 2025-09-27 22:45:00 - testbed 2025-09-27 22:45:00.907201 | orchestrator | 2025-09-27 22:45:00 - wait for servers to be gone 2025-09-27 22:45:09.879240 | orchestrator | 2025-09-27 22:45:09 - clean up ports 2025-09-27 22:45:10.084481 | orchestrator | 2025-09-27 22:45:10 - 3ab11452-e44e-4d87-ac34-219edf05d9a3 2025-09-27 22:45:10.540975 | orchestrator | 2025-09-27 
22:45:10 - 45d70bef-2579-4baf-b866-efb22ec1a7f5 2025-09-27 22:45:10.824692 | orchestrator | 2025-09-27 22:45:10 - 6c5ba333-db1a-4b8f-85a7-1d5c756ff0fa 2025-09-27 22:45:11.098951 | orchestrator | 2025-09-27 22:45:11 - 87233fd6-6ebe-4ce0-b580-d0dc8f326d47 2025-09-27 22:45:11.325142 | orchestrator | 2025-09-27 22:45:11 - 9583c250-012b-49fb-9bac-e5452a35290a 2025-09-27 22:45:11.542718 | orchestrator | 2025-09-27 22:45:11 - ab13fab9-1204-46b2-8a83-377e0bfea1a6 2025-09-27 22:45:11.759787 | orchestrator | 2025-09-27 22:45:11 - b3c59a4d-4bdb-46cb-bad2-a9f8c1d49c78 2025-09-27 22:45:11.981342 | orchestrator | 2025-09-27 22:45:11 - clean up volumes 2025-09-27 22:45:12.111137 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-3-node-base 2025-09-27 22:45:12.149747 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-4-node-base 2025-09-27 22:45:12.187019 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-1-node-base 2025-09-27 22:45:12.234333 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-2-node-base 2025-09-27 22:45:12.272644 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-0-node-base 2025-09-27 22:45:12.309835 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-5-node-base 2025-09-27 22:45:12.349785 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-manager-base 2025-09-27 22:45:12.392402 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-1-node-4 2025-09-27 22:45:12.434261 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-3-node-3 2025-09-27 22:45:12.474518 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-6-node-3 2025-09-27 22:45:12.515078 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-2-node-5 2025-09-27 22:45:12.555739 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-8-node-5 2025-09-27 22:45:12.594712 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-4-node-4 2025-09-27 22:45:12.642687 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-5-node-5 2025-09-27 22:45:12.690518 | orchestrator | 2025-09-27 22:45:12 - 
testbed-volume-7-node-4 2025-09-27 22:45:12.732207 | orchestrator | 2025-09-27 22:45:12 - testbed-volume-0-node-3 2025-09-27 22:45:12.780485 | orchestrator | 2025-09-27 22:45:12 - disconnect routers 2025-09-27 22:45:12.857831 | orchestrator | 2025-09-27 22:45:12 - testbed 2025-09-27 22:45:13.938332 | orchestrator | 2025-09-27 22:45:13 - clean up subnets 2025-09-27 22:45:13.976411 | orchestrator | 2025-09-27 22:45:13 - subnet-testbed-management 2025-09-27 22:45:14.141306 | orchestrator | 2025-09-27 22:45:14 - clean up networks 2025-09-27 22:45:14.870345 | orchestrator | 2025-09-27 22:45:14 - net-testbed-management 2025-09-27 22:45:15.158721 | orchestrator | 2025-09-27 22:45:15 - clean up security groups 2025-09-27 22:45:15.207855 | orchestrator | 2025-09-27 22:45:15 - testbed-management 2025-09-27 22:45:15.329696 | orchestrator | 2025-09-27 22:45:15 - testbed-node 2025-09-27 22:45:15.442855 | orchestrator | 2025-09-27 22:45:15 - clean up floating ips 2025-09-27 22:45:15.471353 | orchestrator | 2025-09-27 22:45:15 - 81.163.193.173 2025-09-27 22:45:15.818446 | orchestrator | 2025-09-27 22:45:15 - clean up routers 2025-09-27 22:45:15.876345 | orchestrator | 2025-09-27 22:45:15 - testbed 2025-09-27 22:45:16.856137 | orchestrator | ok: Runtime: 0:00:17.590452 2025-09-27 22:45:16.858364 | 2025-09-27 22:45:16.858465 | PLAY RECAP 2025-09-27 22:45:16.858533 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-09-27 22:45:16.858586 | 2025-09-27 22:45:16.986719 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-27 22:45:16.989085 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-27 22:45:17.739952 | 2025-09-27 22:45:17.740110 | PLAY [Cleanup play] 2025-09-27 22:45:17.757196 | 2025-09-27 22:45:17.757321 | TASK [Set cloud fact (Zuul deployment)] 2025-09-27 22:45:17.812605 | orchestrator | ok 2025-09-27 22:45:17.823239 | 2025-09-27 
22:45:17.823404 | TASK [Set cloud fact (local deployment)] 2025-09-27 22:45:17.849719 | orchestrator | skipping: Conditional result was False 2025-09-27 22:45:17.860317 | 2025-09-27 22:45:17.860443 | TASK [Clean the cloud environment] 2025-09-27 22:45:18.992244 | orchestrator | 2025-09-27 22:45:18 - clean up servers 2025-09-27 22:45:19.585521 | orchestrator | 2025-09-27 22:45:19 - clean up keypairs 2025-09-27 22:45:19.600888 | orchestrator | 2025-09-27 22:45:19 - wait for servers to be gone 2025-09-27 22:45:19.639563 | orchestrator | 2025-09-27 22:45:19 - clean up ports 2025-09-27 22:45:19.709419 | orchestrator | 2025-09-27 22:45:19 - clean up volumes 2025-09-27 22:45:19.767259 | orchestrator | 2025-09-27 22:45:19 - disconnect routers 2025-09-27 22:45:19.788408 | orchestrator | 2025-09-27 22:45:19 - clean up subnets 2025-09-27 22:45:19.810316 | orchestrator | 2025-09-27 22:45:19 - clean up networks 2025-09-27 22:45:19.931550 | orchestrator | 2025-09-27 22:45:19 - clean up security groups 2025-09-27 22:45:19.966165 | orchestrator | 2025-09-27 22:45:19 - clean up floating ips 2025-09-27 22:45:19.989222 | orchestrator | 2025-09-27 22:45:19 - clean up routers 2025-09-27 22:45:20.400058 | orchestrator | ok: Runtime: 0:00:01.367642 2025-09-27 22:45:20.403987 | 2025-09-27 22:45:20.404116 | PLAY RECAP 2025-09-27 22:45:20.404215 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-09-27 22:45:20.404262 | 2025-09-27 22:45:20.555051 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-27 22:45:20.557390 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-27 22:45:21.332891 | 2025-09-27 22:45:21.333083 | PLAY [Base post-fetch] 2025-09-27 22:45:21.350359 | 2025-09-27 22:45:21.350496 | TASK [fetch-output : Set log path for multiple nodes] 2025-09-27 22:45:21.417122 | orchestrator | skipping: Conditional result was False 2025-09-27 
22:45:21.432476 | 2025-09-27 22:45:21.432740 | TASK [fetch-output : Set log path for single node] 2025-09-27 22:45:21.481637 | orchestrator | ok 2025-09-27 22:45:21.490293 | 2025-09-27 22:45:21.490425 | LOOP [fetch-output : Ensure local output dirs] 2025-09-27 22:45:22.023450 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/58989b4fd94645e9af60764394f17cd1/work/logs" 2025-09-27 22:45:22.329860 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/58989b4fd94645e9af60764394f17cd1/work/artifacts" 2025-09-27 22:45:22.603651 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/58989b4fd94645e9af60764394f17cd1/work/docs" 2025-09-27 22:45:22.632649 | 2025-09-27 22:45:22.632815 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-09-27 22:45:23.542204 | orchestrator | changed: .d..t...... ./ 2025-09-27 22:45:23.542507 | orchestrator | changed: All items complete 2025-09-27 22:45:23.542572 | 2025-09-27 22:45:24.224500 | orchestrator | changed: .d..t...... ./ 2025-09-27 22:45:24.958373 | orchestrator | changed: .d..t...... 
./ 2025-09-27 22:45:24.982718 | 2025-09-27 22:45:24.982893 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-09-27 22:45:25.022242 | orchestrator | skipping: Conditional result was False 2025-09-27 22:45:25.025073 | orchestrator | skipping: Conditional result was False 2025-09-27 22:45:25.041113 | 2025-09-27 22:45:25.041222 | PLAY RECAP 2025-09-27 22:45:25.041309 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-09-27 22:45:25.041352 | 2025-09-27 22:45:25.171629 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-27 22:45:25.172628 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-27 22:45:25.898398 | 2025-09-27 22:45:25.898579 | PLAY [Base post] 2025-09-27 22:45:25.913028 | 2025-09-27 22:45:25.913157 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-09-27 22:45:27.208833 | orchestrator | changed 2025-09-27 22:45:27.219593 | 2025-09-27 22:45:27.219734 | PLAY RECAP 2025-09-27 22:45:27.219818 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-09-27 22:45:27.219898 | 2025-09-27 22:45:27.351054 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-27 22:45:27.352079 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-09-27 22:45:28.174113 | 2025-09-27 22:45:28.174337 | PLAY [Base post-logs] 2025-09-27 22:45:28.187016 | 2025-09-27 22:45:28.187184 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-09-27 22:45:28.666534 | localhost | changed 2025-09-27 22:45:28.685768 | 2025-09-27 22:45:28.685949 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-09-27 22:45:28.724932 | localhost | ok 2025-09-27 22:45:28.730712 | 2025-09-27 22:45:28.730900 | TASK [Set zuul-log-path fact] 2025-09-27 
22:45:28.748936 | localhost | ok 2025-09-27 22:45:28.760886 | 2025-09-27 22:45:28.761012 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-27 22:45:28.787605 | localhost | ok 2025-09-27 22:45:28.790728 | 2025-09-27 22:45:28.790851 | TASK [upload-logs : Create log directories] 2025-09-27 22:45:29.297955 | localhost | changed 2025-09-27 22:45:29.303584 | 2025-09-27 22:45:29.303745 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-27 22:45:29.785230 | localhost -> localhost | ok: Runtime: 0:00:00.007659 2025-09-27 22:45:29.790141 | 2025-09-27 22:45:29.790272 | TASK [upload-logs : Upload logs to log server] 2025-09-27 22:45:30.347465 | localhost | Output suppressed because no_log was given 2025-09-27 22:45:30.351326 | 2025-09-27 22:45:30.351505 | LOOP [upload-logs : Compress console log and json output] 2025-09-27 22:45:30.399491 | localhost | skipping: Conditional result was False 2025-09-27 22:45:30.403833 | localhost | skipping: Conditional result was False 2025-09-27 22:45:30.412038 | 2025-09-27 22:45:30.412249 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-27 22:45:30.457171 | localhost | skipping: Conditional result was False 2025-09-27 22:45:30.457838 | 2025-09-27 22:45:30.461083 | localhost | skipping: Conditional result was False 2025-09-27 22:45:30.473962 | 2025-09-27 22:45:30.474175 | LOOP [upload-logs : Upload console log and json output]
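
Both bootstrap attempts in the log above fail the "Create public network" task with the same Neutron 504 after roughly five minutes (303s each). One possible mitigation, shown here as a hypothetical sketch that is not part of the osism/testbed playbooks, is to wrap such API calls in a retry loop that treats gateway timeouts as transient while failing fast on everything else:

```python
import time


def retry_on_504(call, attempts=3, delay=1.0):
    """Retry a callable when it raises an exception whose message mentions
    a 504 gateway timeout; re-raise anything else immediately, and re-raise
    the last 504 once the attempt budget is exhausted."""
    last_exc = None
    for attempt in range(attempts):
        try:
            return call()
        except Exception as exc:  # e.g. openstacksdk's HttpException
            if "504" not in str(exc):
                raise  # not a gateway timeout: fail fast
            last_exc = exc
            time.sleep(delay * (2 ** attempt))  # exponential backoff
    raise last_exc
```

In this particular job a simple retry would likely not have rescued the build, since both full play re-runs hit the same timeout, but backoff at least gives an overloaded Neutron API a chance to recover before the task is declared fatal.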
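
The "Clean the cloud environment" task above tears resources down in strict dependency order: servers first (so ports and volumes become detachable), routers last (after their interfaces are disconnected and subnets removed). A minimal sketch of that ordering, assuming each class only needs the previous ones gone; the names mirror the log, not any real API:

```python
# Teardown order mirrored from the cleanup log above; each resource
# class must be removed before the classes it depends on.
CLEANUP_ORDER = [
    "servers",
    "keypairs",
    "ports",
    "volumes",
    "router interfaces",  # "disconnect routers" in the log
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",
]


def cleanup(delete_fn):
    """Apply delete_fn to each resource class in dependency order."""
    for kind in CLEANUP_ORDER:
        delete_fn(kind)
```

Running the ordering in reverse would fail: Neutron refuses to delete a subnet that still has ports, and a router cannot be removed while interfaces are attached, which is why the cleanup play waits for servers to be gone before touching ports.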