2025-10-09 09:35:52.711094 | Job console starting
2025-10-09 09:35:52.725349 | Updating git repos
2025-10-09 09:35:52.819131 | Cloning repos into workspace
2025-10-09 09:35:53.091404 | Restoring repo states
2025-10-09 09:35:53.114105 | Merging changes
2025-10-09 09:35:53.114124 | Checking out repos
2025-10-09 09:35:53.615644 | Preparing playbooks
2025-10-09 09:35:54.316106 | Running Ansible setup
2025-10-09 09:35:59.311908 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-10-09 09:36:00.024894 |
2025-10-09 09:36:00.025725 | PLAY [Base pre]
2025-10-09 09:36:00.043178 |
2025-10-09 09:36:00.043311 | TASK [Setup log path fact]
2025-10-09 09:36:00.074135 | orchestrator | ok
2025-10-09 09:36:00.093359 |
2025-10-09 09:36:00.093519 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-10-09 09:36:00.133319 | orchestrator | ok
2025-10-09 09:36:00.146039 |
2025-10-09 09:36:00.146161 | TASK [emit-job-header : Print job information]
2025-10-09 09:36:00.199121 | # Job Information
2025-10-09 09:36:00.199348 | Ansible Version: 2.16.14
2025-10-09 09:36:00.199393 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-10-09 09:36:00.199438 | Pipeline: post
2025-10-09 09:36:00.199469 | Executor: 521e9411259a
2025-10-09 09:36:00.199497 | Triggered by: https://github.com/osism/testbed/commit/cbb11947ce2601da075b0f7ece9477f2cf45eae4
2025-10-09 09:36:00.199526 | Event ID: 5385c752-a4f3-11f0-8ad9-0c0a6becd823
2025-10-09 09:36:00.207464 |
2025-10-09 09:36:00.207611 | LOOP [emit-job-header : Print node information]
2025-10-09 09:36:00.350348 | orchestrator | ok:
2025-10-09 09:36:00.350583 | orchestrator | # Node Information
2025-10-09 09:36:00.350625 | orchestrator | Inventory Hostname: orchestrator
2025-10-09 09:36:00.350650 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-10-09 09:36:00.350672 | orchestrator | Username: zuul-testbed06
2025-10-09 09:36:00.350692 | orchestrator | Distro: Debian 12.12
2025-10-09 09:36:00.350715 | orchestrator | Provider: static-testbed
2025-10-09 09:36:00.350736 | orchestrator | Region:
2025-10-09 09:36:00.350757 | orchestrator | Label: testbed-orchestrator
2025-10-09 09:36:00.350777 | orchestrator | Product Name: OpenStack Nova
2025-10-09 09:36:00.350796 | orchestrator | Interface IP: 81.163.193.140
2025-10-09 09:36:00.370098 |
2025-10-09 09:36:00.370228 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-10-09 09:36:00.830307 | orchestrator -> localhost | changed
2025-10-09 09:36:00.839582 |
2025-10-09 09:36:00.839707 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-10-09 09:36:01.843310 | orchestrator -> localhost | changed
2025-10-09 09:36:01.869539 |
2025-10-09 09:36:01.869733 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-10-09 09:36:02.146849 | orchestrator -> localhost | ok
2025-10-09 09:36:02.154081 |
2025-10-09 09:36:02.154273 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-10-09 09:36:02.183252 | orchestrator | ok
2025-10-09 09:36:02.199148 | orchestrator | included: /var/lib/zuul/builds/8acddd485d59423196f19f4b453180c3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-10-09 09:36:02.208760 |
2025-10-09 09:36:02.208857 | TASK [add-build-sshkey : Create Temp SSH key]
2025-10-09 09:36:03.232560 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-10-09 09:36:03.232809 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8acddd485d59423196f19f4b453180c3/work/8acddd485d59423196f19f4b453180c3_id_rsa
2025-10-09 09:36:03.232848 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8acddd485d59423196f19f4b453180c3/work/8acddd485d59423196f19f4b453180c3_id_rsa.pub
2025-10-09 09:36:03.232873 | orchestrator -> localhost | The key fingerprint is:
2025-10-09 09:36:03.232897 | orchestrator -> localhost | SHA256:zcuMaN/5jJLzK2XzZ2dSaZ+VuVo8RyyzJDwVsd8/W7E zuul-build-sshkey
2025-10-09 09:36:03.232919 | orchestrator -> localhost | The key's randomart image is:
2025-10-09 09:36:03.232951 | orchestrator -> localhost | +---[RSA 3072]----+
2025-10-09 09:36:03.232973 | orchestrator -> localhost | | o. |
2025-10-09 09:36:03.232994 | orchestrator -> localhost | | o |
2025-10-09 09:36:03.233014 | orchestrator -> localhost | | o |
2025-10-09 09:36:03.233033 | orchestrator -> localhost | | o . . o.|
2025-10-09 09:36:03.233052 | orchestrator -> localhost | | S o + +.O|
2025-10-09 09:36:03.233076 | orchestrator -> localhost | | . ++. +.X*|
2025-10-09 09:36:03.233096 | orchestrator -> localhost | | o .++o oEO|
2025-10-09 09:36:03.233116 | orchestrator -> localhost | | . .=. +. =oX|
2025-10-09 09:36:03.233136 | orchestrator -> localhost | | .=*oo+.= |
2025-10-09 09:36:03.233156 | orchestrator -> localhost | +----[SHA256]-----+
2025-10-09 09:36:03.233213 | orchestrator -> localhost | ok: Runtime: 0:00:00.553830
2025-10-09 09:36:03.240840 |
2025-10-09 09:36:03.240955 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-10-09 09:36:03.269682 | orchestrator | ok
2025-10-09 09:36:03.279609 | orchestrator | included: /var/lib/zuul/builds/8acddd485d59423196f19f4b453180c3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-10-09 09:36:03.288544 |
2025-10-09 09:36:03.288681 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-10-09 09:36:03.311817 | orchestrator | skipping: Conditional result was False
2025-10-09 09:36:03.319327 |
2025-10-09 09:36:03.319427 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-10-09 09:36:03.895709 | orchestrator | changed
2025-10-09 09:36:03.902176 |
2025-10-09 09:36:03.902285 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-10-09 09:36:04.185257 | orchestrator | ok
2025-10-09 09:36:04.194718 |
2025-10-09 09:36:04.194874 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-10-09 09:36:04.610114 | orchestrator | ok
2025-10-09 09:36:04.616268 |
2025-10-09 09:36:04.616385 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-10-09 09:36:05.027011 | orchestrator | ok
2025-10-09 09:36:05.036410 |
2025-10-09 09:36:05.036537 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-10-09 09:36:05.070868 | orchestrator | skipping: Conditional result was False
2025-10-09 09:36:05.086473 |
2025-10-09 09:36:05.086646 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-10-09 09:36:05.536850 | orchestrator -> localhost | changed
2025-10-09 09:36:05.555507 |
2025-10-09 09:36:05.555714 | TASK [add-build-sshkey : Add back temp key]
2025-10-09 09:36:05.884027 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8acddd485d59423196f19f4b453180c3/work/8acddd485d59423196f19f4b453180c3_id_rsa (zuul-build-sshkey)
2025-10-09 09:36:05.884605 | orchestrator -> localhost | ok: Runtime: 0:00:00.011604
2025-10-09 09:36:05.900280 |
2025-10-09 09:36:05.900449 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-10-09 09:36:06.332130 | orchestrator | ok
2025-10-09 09:36:06.340341 |
2025-10-09 09:36:06.340480 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-10-09 09:36:06.375606 | orchestrator | skipping: Conditional result was False
2025-10-09 09:36:06.435277 |
2025-10-09 09:36:06.435397 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-10-09 09:36:06.824877 | orchestrator | ok
2025-10-09 09:36:06.838343 |
2025-10-09 09:36:06.838465 | TASK [validate-host : Define zuul_info_dir fact]
2025-10-09 09:36:06.879922 | orchestrator | ok
2025-10-09 09:36:06.888327 |
2025-10-09 09:36:06.888435 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-10-09 09:36:07.215487 | orchestrator -> localhost | ok
2025-10-09 09:36:07.223295 |
2025-10-09 09:36:07.223423 | TASK [validate-host : Collect information about the host]
2025-10-09 09:36:08.407065 | orchestrator | ok
2025-10-09 09:36:08.423613 |
2025-10-09 09:36:08.423743 | TASK [validate-host : Sanitize hostname]
2025-10-09 09:36:08.490463 | orchestrator | ok
2025-10-09 09:36:08.500450 |
2025-10-09 09:36:08.500631 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-10-09 09:36:09.062292 | orchestrator -> localhost | changed
2025-10-09 09:36:09.069249 |
2025-10-09 09:36:09.069380 | TASK [validate-host : Collect information about zuul worker]
2025-10-09 09:36:09.492720 | orchestrator | ok
2025-10-09 09:36:09.498493 |
2025-10-09 09:36:09.498626 | TASK [validate-host : Write out all zuul information for each host]
2025-10-09 09:36:10.030781 | orchestrator -> localhost | changed
2025-10-09 09:36:10.049645 |
2025-10-09 09:36:10.049815 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-10-09 09:36:10.350451 | orchestrator | ok
2025-10-09 09:36:10.362577 |
2025-10-09 09:36:10.362724 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-10-09 09:36:48.845730 | orchestrator | changed:
2025-10-09 09:36:48.847228 | orchestrator | .d..t...... src/
2025-10-09 09:36:48.847318 | orchestrator | .d..t...... src/github.com/
2025-10-09 09:36:48.847355 | orchestrator | .d..t...... src/github.com/osism/
2025-10-09 09:36:48.847387 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-10-09 09:36:48.847417 | orchestrator | RedHat.yml
2025-10-09 09:36:48.862790 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-10-09 09:36:48.862807 | orchestrator | RedHat.yml
2025-10-09 09:36:48.862882 | orchestrator | = 2.2.0"...
2025-10-09 09:36:59.917992 | orchestrator | 09:36:59.917 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-10-09 09:36:59.942145 | orchestrator | 09:36:59.941 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-10-09 09:37:00.437759 | orchestrator | 09:37:00.436 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-10-09 09:37:01.371918 | orchestrator | 09:37:01.371 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-10-09 09:37:01.440300 | orchestrator | 09:37:01.440 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-10-09 09:37:02.100072 | orchestrator | 09:37:02.099 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-10-09 09:37:02.185865 | orchestrator | 09:37:02.185 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-10-09 09:37:02.847735 | orchestrator | 09:37:02.847 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-10-09 09:37:02.847810 | orchestrator | 09:37:02.847 STDOUT terraform: Providers are signed by their developers.
2025-10-09 09:37:02.847820 | orchestrator | 09:37:02.847 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-10-09 09:37:02.847827 | orchestrator | 09:37:02.847 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-10-09 09:37:02.847836 | orchestrator | 09:37:02.847 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-10-09 09:37:02.847908 | orchestrator | 09:37:02.847 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-10-09 09:37:02.847951 | orchestrator | 09:37:02.847 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-10-09 09:37:02.847962 | orchestrator | 09:37:02.847 STDOUT terraform: you run "tofu init" in the future.
2025-10-09 09:37:02.848025 | orchestrator | 09:37:02.847 STDOUT terraform: OpenTofu has been successfully initialized!
2025-10-09 09:37:02.848110 | orchestrator | 09:37:02.848 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-10-09 09:37:02.848154 | orchestrator | 09:37:02.848 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-10-09 09:37:02.848181 | orchestrator | 09:37:02.848 STDOUT terraform: should now work.
2025-10-09 09:37:02.848235 | orchestrator | 09:37:02.848 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-10-09 09:37:02.848286 | orchestrator | 09:37:02.848 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-10-09 09:37:02.848333 | orchestrator | 09:37:02.848 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-10-09 09:37:03.152735 | orchestrator | 09:37:03.152 STDOUT terraform: Created and switched to workspace "ci"!
2025-10-09 09:37:03.152787 | orchestrator | 09:37:03.152 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-10-09 09:37:03.152809 | orchestrator | 09:37:03.152 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-10-09 09:37:03.152817 | orchestrator | 09:37:03.152 STDOUT terraform: for this configuration.
2025-10-09 09:37:03.401146 | orchestrator | 09:37:03.400 STDOUT terraform: ci.auto.tfvars
2025-10-09 09:37:03.402947 | orchestrator | 09:37:03.402 STDOUT terraform: default_custom.tf
2025-10-09 09:37:04.411362 | orchestrator | 09:37:04.411 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-10-09 09:37:04.941871 | orchestrator | 09:37:04.941 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-10-09 09:37:05.252538 | orchestrator | 09:37:05.250 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-10-09 09:37:05.252605 | orchestrator | 09:37:05.250 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-10-09 09:37:05.252612 | orchestrator | 09:37:05.250 STDOUT terraform:   + create
2025-10-09 09:37:05.252618 | orchestrator | 09:37:05.250 STDOUT terraform:  <= read (data resources)
2025-10-09 09:37:05.252623 | orchestrator | 09:37:05.250 STDOUT terraform: OpenTofu will perform the following actions:
2025-10-09 09:37:05.252629 | orchestrator | 09:37:05.250 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-10-09 09:37:05.252638 | orchestrator | 09:37:05.250 STDOUT terraform:   # (config refers to values not yet known)
2025-10-09 09:37:05.252643 | orchestrator | 09:37:05.250 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-10-09 09:37:05.252647 | orchestrator | 09:37:05.250 STDOUT terraform:   + checksum = (known after apply)
2025-10-09 09:37:05.252651 | orchestrator | 09:37:05.250 STDOUT terraform:   + created_at = (known after apply)
2025-10-09 09:37:05.252656 | orchestrator | 09:37:05.250 STDOUT terraform:   + file = (known after apply)
2025-10-09 09:37:05.252660 | orchestrator | 09:37:05.250 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.252664 | orchestrator | 09:37:05.250 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.252668 | orchestrator | 09:37:05.250 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-10-09 09:37:05.252673 | orchestrator | 09:37:05.250 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-10-09 09:37:05.252677 | orchestrator | 09:37:05.250 STDOUT terraform:   + most_recent = true
2025-10-09 09:37:05.252681 | orchestrator | 09:37:05.250 STDOUT terraform:   + name = (known after apply)
2025-10-09 09:37:05.252685 | orchestrator | 09:37:05.250 STDOUT terraform:   + protected = (known after apply)
2025-10-09 09:37:05.252689 | orchestrator | 09:37:05.250 STDOUT terraform:   + region = (known after apply)
2025-10-09 09:37:05.252694 | orchestrator | 09:37:05.250 STDOUT terraform:   + schema = (known after apply)
2025-10-09 09:37:05.252698 | orchestrator | 09:37:05.250 STDOUT terraform:   + size_bytes = (known after apply)
2025-10-09 09:37:05.252702 | orchestrator | 09:37:05.250 STDOUT terraform:   + tags = (known after apply)
2025-10-09 09:37:05.252706 | orchestrator | 09:37:05.250 STDOUT terraform:   + updated_at = (known after apply)
2025-10-09 09:37:05.252723 | orchestrator | 09:37:05.250 STDOUT terraform:   }
2025-10-09 09:37:05.252727 | orchestrator | 09:37:05.250 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-10-09 09:37:05.252732 | orchestrator | 09:37:05.251 STDOUT terraform:   # (config refers to values not yet known)
2025-10-09 09:37:05.252736 | orchestrator | 09:37:05.251 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-10-09 09:37:05.252740 | orchestrator | 09:37:05.251 STDOUT terraform:   + checksum = (known after apply)
2025-10-09 09:37:05.252745 | orchestrator | 09:37:05.251 STDOUT terraform:   + created_at = (known after apply)
2025-10-09 09:37:05.252749 | orchestrator | 09:37:05.251 STDOUT terraform:   + file = (known after apply)
2025-10-09 09:37:05.252756 | orchestrator | 09:37:05.251 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.252760 | orchestrator | 09:37:05.251 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.252765 | orchestrator | 09:37:05.251 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-10-09 09:37:05.252769 | orchestrator | 09:37:05.251 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-10-09 09:37:05.252773 | orchestrator | 09:37:05.251 STDOUT terraform:   + most_recent = true
2025-10-09 09:37:05.252777 | orchestrator | 09:37:05.251 STDOUT terraform:   + name = (known after apply)
2025-10-09 09:37:05.252781 | orchestrator | 09:37:05.251 STDOUT terraform:   + protected = (known after apply)
2025-10-09 09:37:05.252785 | orchestrator | 09:37:05.251 STDOUT terraform:   + region = (known after apply)
2025-10-09 09:37:05.252801 | orchestrator | 09:37:05.251 STDOUT terraform:   + schema = (known after apply)
2025-10-09 09:37:05.252805 | orchestrator | 09:37:05.251 STDOUT terraform:   + size_bytes = (known after apply)
2025-10-09 09:37:05.252809 | orchestrator | 09:37:05.251 STDOUT terraform:   + tags = (known after apply)
2025-10-09 09:37:05.252814 | orchestrator | 09:37:05.251 STDOUT terraform:   + updated_at = (known after apply)
2025-10-09 09:37:05.252818 | orchestrator | 09:37:05.251 STDOUT terraform:   }
2025-10-09 09:37:05.252822 | orchestrator | 09:37:05.251 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-10-09 09:37:05.252826 | orchestrator | 09:37:05.251 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-10-09 09:37:05.252838 | orchestrator | 09:37:05.251 STDOUT terraform:   + content = (known after apply)
2025-10-09 09:37:05.252845 | orchestrator | 09:37:05.251 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-10-09 09:37:05.252849 | orchestrator | 09:37:05.251 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-10-09 09:37:05.252853 | orchestrator | 09:37:05.251 STDOUT terraform:   + content_md5 = (known after apply)
2025-10-09 09:37:05.252857 | orchestrator | 09:37:05.251 STDOUT terraform:   + content_sha1 = (known after apply)
2025-10-09 09:37:05.252862 | orchestrator | 09:37:05.251 STDOUT terraform:   + content_sha256 = (known after apply)
2025-10-09 09:37:05.252866 | orchestrator | 09:37:05.251 STDOUT terraform:   + content_sha512 = (known after apply)
2025-10-09 09:37:05.252870 | orchestrator | 09:37:05.251 STDOUT terraform:   + directory_permission = "0777"
2025-10-09 09:37:05.252878 | orchestrator | 09:37:05.251 STDOUT terraform:   + file_permission = "0644"
2025-10-09 09:37:05.252882 | orchestrator | 09:37:05.252 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-10-09 09:37:05.252886 | orchestrator | 09:37:05.252 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.252891 | orchestrator | 09:37:05.252 STDOUT terraform:   }
2025-10-09 09:37:05.252895 | orchestrator | 09:37:05.252 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-10-09 09:37:05.252900 | orchestrator | 09:37:05.252 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-10-09 09:37:05.252904 | orchestrator | 09:37:05.252 STDOUT terraform:   + content = (known after apply)
2025-10-09 09:37:05.252908 | orchestrator | 09:37:05.252 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-10-09 09:37:05.252912 | orchestrator | 09:37:05.252 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-10-09 09:37:05.252916 | orchestrator | 09:37:05.252 STDOUT terraform:   + content_md5 = (known after apply)
2025-10-09 09:37:05.252920 | orchestrator | 09:37:05.252 STDOUT terraform:   + content_sha1 = (known after apply)
2025-10-09 09:37:05.252927 | orchestrator | 09:37:05.252 STDOUT terraform:   + content_sha256 = (known after apply)
2025-10-09 09:37:05.252931 | orchestrator | 09:37:05.252 STDOUT terraform:   + content_sha512 = (known after apply)
2025-10-09 09:37:05.252935 | orchestrator | 09:37:05.252 STDOUT terraform:   + directory_permission = "0777"
2025-10-09 09:37:05.252939 | orchestrator | 09:37:05.252 STDOUT terraform:   + file_permission = "0644"
2025-10-09 09:37:05.252944 | orchestrator | 09:37:05.252 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-10-09 09:37:05.252948 | orchestrator | 09:37:05.252 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.252954 | orchestrator | 09:37:05.252 STDOUT terraform:   }
2025-10-09 09:37:05.252958 | orchestrator | 09:37:05.252 STDOUT terraform:   # local_file.inventory will be created
2025-10-09 09:37:05.253102 | orchestrator | 09:37:05.252 STDOUT terraform:   + resource "local_file" "inventory" {
2025-10-09 09:37:05.253109 | orchestrator | 09:37:05.252 STDOUT terraform:   + content = (known after apply)
2025-10-09 09:37:05.253113 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-10-09 09:37:05.253119 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-10-09 09:37:05.253259 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_md5 = (known after apply)
2025-10-09 09:37:05.253264 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_sha1 = (known after apply)
2025-10-09 09:37:05.253268 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_sha256 = (known after apply)
2025-10-09 09:37:05.253272 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_sha512 = (known after apply)
2025-10-09 09:37:05.253276 | orchestrator | 09:37:05.253 STDOUT terraform:   + directory_permission = "0777"
2025-10-09 09:37:05.253282 | orchestrator | 09:37:05.253 STDOUT terraform:   + file_permission = "0644"
2025-10-09 09:37:05.253413 | orchestrator | 09:37:05.253 STDOUT terraform:   + filename = "inventory.ci"
2025-10-09 09:37:05.253425 | orchestrator | 09:37:05.253 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.253429 | orchestrator | 09:37:05.253 STDOUT terraform:   }
2025-10-09 09:37:05.253433 | orchestrator | 09:37:05.253 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-10-09 09:37:05.253437 | orchestrator | 09:37:05.253 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-10-09 09:37:05.253443 | orchestrator | 09:37:05.253 STDOUT terraform:   + content = (sensitive value)
2025-10-09 09:37:05.253449 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-10-09 09:37:05.253565 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-10-09 09:37:05.253570 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_md5 = (known after apply)
2025-10-09 09:37:05.253574 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_sha1 = (known after apply)
2025-10-09 09:37:05.253580 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_sha256 = (known after apply)
2025-10-09 09:37:05.253722 | orchestrator | 09:37:05.253 STDOUT terraform:   + content_sha512 = (known after apply)
2025-10-09 09:37:05.253728 | orchestrator | 09:37:05.253 STDOUT terraform:   + directory_permission = "0700"
2025-10-09 09:37:05.253732 | orchestrator | 09:37:05.253 STDOUT terraform:   + file_permission = "0600"
2025-10-09 09:37:05.253736 | orchestrator | 09:37:05.253 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-10-09 09:37:05.253740 | orchestrator | 09:37:05.253 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.253744 | orchestrator | 09:37:05.253 STDOUT terraform:   }
2025-10-09 09:37:05.253750 | orchestrator | 09:37:05.253 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-10-09 09:37:05.253756 | orchestrator | 09:37:05.253 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-10-09 09:37:05.253882 | orchestrator | 09:37:05.253 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.253887 | orchestrator | 09:37:05.253 STDOUT terraform:   }
2025-10-09 09:37:05.253891 | orchestrator | 09:37:05.253 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-10-09 09:37:05.253895 | orchestrator | 09:37:05.253 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-10-09 09:37:05.253901 | orchestrator | 09:37:05.253 STDOUT terraform:   + attachment = (known after apply)
2025-10-09 09:37:05.254053 | orchestrator | 09:37:05.253 STDOUT terraform:   + availability_zone = "nova"
2025-10-09 09:37:05.254064 | orchestrator | 09:37:05.253 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.254069 | orchestrator | 09:37:05.253 STDOUT terraform:   + image_id = (known after apply)
2025-10-09 09:37:05.254112 | orchestrator | 09:37:05.253 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.262084 | orchestrator | 09:37:05.254 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-10-09 09:37:05.262102 | orchestrator | 09:37:05.262 STDOUT terraform:   + region = (known after apply)
2025-10-09 09:37:05.262127 | orchestrator | 09:37:05.262 STDOUT terraform:   + size = 80
2025-10-09 09:37:05.262147 | orchestrator | 09:37:05.262 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-09 09:37:05.262172 | orchestrator | 09:37:05.262 STDOUT terraform:   + volume_type = "ssd"
2025-10-09 09:37:05.262178 | orchestrator | 09:37:05.262 STDOUT terraform:   }
2025-10-09 09:37:05.262387 | orchestrator | 09:37:05.262 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-10-09 09:37:05.262473 | orchestrator | 09:37:05.262 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-09 09:37:05.262489 | orchestrator | 09:37:05.262 STDOUT terraform:   + attachment = (known after apply)
2025-10-09 09:37:05.262524 | orchestrator | 09:37:05.262 STDOUT terraform:   + availability_zone = "nova"
2025-10-09 09:37:05.262537 | orchestrator | 09:37:05.262 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.262549 | orchestrator | 09:37:05.262 STDOUT terraform:   + image_id = (known after apply)
2025-10-09 09:37:05.262561 | orchestrator | 09:37:05.262 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.262573 | orchestrator | 09:37:05.262 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-10-09 09:37:05.262585 | orchestrator | 09:37:05.262 STDOUT terraform:   + region = (known after apply)
2025-10-09 09:37:05.262597 | orchestrator | 09:37:05.262 STDOUT terraform:   + size = 80
2025-10-09 09:37:05.262609 | orchestrator | 09:37:05.262 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-09 09:37:05.262625 | orchestrator | 09:37:05.262 STDOUT terraform:   + volume_type = "ssd"
2025-10-09 09:37:05.262638 | orchestrator | 09:37:05.262 STDOUT terraform:   }
2025-10-09 09:37:05.262650 | orchestrator | 09:37:05.262 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-10-09 09:37:05.262662 | orchestrator | 09:37:05.262 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-09 09:37:05.262678 | orchestrator | 09:37:05.262 STDOUT terraform:   + attachment = (known after apply)
2025-10-09 09:37:05.262690 | orchestrator | 09:37:05.262 STDOUT terraform:   + availability_zone = "nova"
2025-10-09 09:37:05.262705 | orchestrator | 09:37:05.262 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.262745 | orchestrator | 09:37:05.262 STDOUT terraform:   + image_id = (known after apply)
2025-10-09 09:37:05.262777 | orchestrator | 09:37:05.262 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.262818 | orchestrator | 09:37:05.262 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-10-09 09:37:05.262853 | orchestrator | 09:37:05.262 STDOUT terraform:   + region = (known after apply)
2025-10-09 09:37:05.262869 | orchestrator | 09:37:05.262 STDOUT terraform:   + size = 80
2025-10-09 09:37:05.262885 | orchestrator | 09:37:05.262 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-09 09:37:05.262900 | orchestrator | 09:37:05.262 STDOUT terraform:   + volume_type = "ssd"
2025-10-09 09:37:05.262916 | orchestrator | 09:37:05.262 STDOUT terraform:   }
2025-10-09 09:37:05.262969 | orchestrator | 09:37:05.262 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-10-09 09:37:05.263065 | orchestrator | 09:37:05.262 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-09 09:37:05.263080 | orchestrator | 09:37:05.262 STDOUT terraform:   + attachment = (known after apply)
2025-10-09 09:37:05.263092 | orchestrator | 09:37:05.263 STDOUT terraform:   + availability_zone = "nova"
2025-10-09 09:37:05.263107 | orchestrator | 09:37:05.263 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.263123 | orchestrator | 09:37:05.263 STDOUT terraform:   + image_id = (known after apply)
2025-10-09 09:37:05.263161 | orchestrator | 09:37:05.263 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.263217 | orchestrator | 09:37:05.263 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-10-09 09:37:05.263235 | orchestrator | 09:37:05.263 STDOUT terraform:   + region = (known after apply)
2025-10-09 09:37:05.263250 | orchestrator | 09:37:05.263 STDOUT terraform:   + size = 80
2025-10-09 09:37:05.263266 | orchestrator | 09:37:05.263 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-09 09:37:05.263282 | orchestrator | 09:37:05.263 STDOUT terraform:   + volume_type = "ssd"
2025-10-09 09:37:05.263297 | orchestrator | 09:37:05.263 STDOUT terraform:   }
2025-10-09 09:37:05.263340 | orchestrator | 09:37:05.263 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-10-09 09:37:05.263383 | orchestrator | 09:37:05.263 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-09 09:37:05.263408 | orchestrator | 09:37:05.263 STDOUT terraform:   + attachment = (known after apply)
2025-10-09 09:37:05.263424 | orchestrator | 09:37:05.263 STDOUT terraform:   + availability_zone = "nova"
2025-10-09 09:37:05.263455 | orchestrator | 09:37:05.263 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.263500 | orchestrator | 09:37:05.263 STDOUT terraform:   + image_id = (known after apply)
2025-10-09 09:37:05.263516 | orchestrator | 09:37:05.263 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.263569 | orchestrator | 09:37:05.263 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-10-09 09:37:05.263587 | orchestrator | 09:37:05.263 STDOUT terraform:   + region = (known after apply)
2025-10-09 09:37:05.263602 | orchestrator | 09:37:05.263 STDOUT terraform:   + size = 80
2025-10-09 09:37:05.263618 | orchestrator | 09:37:05.263 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-09 09:37:05.263646 | orchestrator | 09:37:05.263 STDOUT terraform:   + volume_type = "ssd"
2025-10-09 09:37:05.263662 | orchestrator | 09:37:05.263 STDOUT terraform:   }
2025-10-09 09:37:05.263703 | orchestrator | 09:37:05.263 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-10-09 09:37:05.263746 | orchestrator | 09:37:05.263 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-09 09:37:05.263763 | orchestrator | 09:37:05.263 STDOUT terraform:   + attachment = (known after apply)
2025-10-09 09:37:05.263788 | orchestrator | 09:37:05.263 STDOUT terraform:   + availability_zone = "nova"
2025-10-09 09:37:05.263816 | orchestrator | 09:37:05.263 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.263861 | orchestrator | 09:37:05.263 STDOUT terraform:   + image_id = (known after apply)
2025-10-09 09:37:05.263878 | orchestrator | 09:37:05.263 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.263934 | orchestrator | 09:37:05.263 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-10-09 09:37:05.263952 | orchestrator | 09:37:05.263 STDOUT terraform:   + region = (known after apply)
2025-10-09 09:37:05.263967 | orchestrator | 09:37:05.263 STDOUT terraform:   + size = 80
2025-10-09 09:37:05.263982 | orchestrator | 09:37:05.263 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-09 09:37:05.264036 | orchestrator | 09:37:05.263 STDOUT terraform:   + volume_type = "ssd"
2025-10-09 09:37:05.264050 | orchestrator | 09:37:05.264 STDOUT terraform:   }
2025-10-09 09:37:05.264066 | orchestrator | 09:37:05.264 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-10-09 09:37:05.264134 | orchestrator | 09:37:05.264 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-10-09 09:37:05.264154 | orchestrator | 09:37:05.264 STDOUT terraform:   + attachment = (known after apply)
2025-10-09 09:37:05.264169 | orchestrator | 09:37:05.264 STDOUT terraform:   + availability_zone = "nova"
2025-10-09 09:37:05.264202 | orchestrator | 09:37:05.264 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.264229 | orchestrator | 09:37:05.264 STDOUT terraform:   + image_id = (known after apply)
2025-10-09 09:37:05.264274 | orchestrator | 09:37:05.264 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.264326 | orchestrator | 09:37:05.264 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-10-09 09:37:05.264343 | orchestrator | 09:37:05.264 STDOUT terraform:   + region = (known after apply)
2025-10-09 09:37:05.264359 | orchestrator | 09:37:05.264 STDOUT terraform:   + size = 80
2025-10-09 09:37:05.264374 | orchestrator | 09:37:05.264 STDOUT terraform:   + volume_retype_policy = "never"
2025-10-09 09:37:05.264389 | orchestrator | 09:37:05.264 STDOUT terraform:   + volume_type = "ssd"
2025-10-09 09:37:05.264405 | orchestrator | 09:37:05.264 STDOUT terraform:   }
2025-10-09 09:37:05.264444 | orchestrator | 09:37:05.264 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-10-09 09:37:05.264484 | orchestrator | 09:37:05.264 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-10-09 09:37:05.264501 | orchestrator | 09:37:05.264 STDOUT terraform:   + attachment = (known after apply)
2025-10-09 09:37:05.264550 | orchestrator | 09:37:05.264 STDOUT terraform:   + availability_zone = "nova"
2025-10-09 09:37:05.264567 | orchestrator | 09:37:05.264 STDOUT terraform:   + id = (known after apply)
2025-10-09 09:37:05.264583 | orchestrator | 09:37:05.264 STDOUT terraform:   + metadata = (known after apply)
2025-10-09 09:37:05.264771 | orchestrator | 09:37:05.264 STDOUT terraform:   + name = "testbed-volume-0-node-3"
2025-10-09 09:37:05.264886 | orchestrator | 09:37:05.264 STDOUT terraform:   + region = (known
after apply) 2025-10-09 09:37:05.264973 | orchestrator | 09:37:05.264 STDOUT terraform:  + size = 20 2025-10-09 09:37:05.265198 | orchestrator | 09:37:05.264 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:05.265316 | orchestrator | 09:37:05.265 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:05.265470 | orchestrator | 09:37:05.265 STDOUT terraform:  } 2025-10-09 09:37:05.265850 | orchestrator | 09:37:05.265 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-10-09 09:37:05.266282 | orchestrator | 09:37:05.265 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:05.267189 | orchestrator | 09:37:05.266 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:05.268991 | orchestrator | 09:37:05.267 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.269414 | orchestrator | 09:37:05.269 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.269461 | orchestrator | 09:37:05.269 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:05.269476 | orchestrator | 09:37:05.269 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-10-09 09:37:05.269492 | orchestrator | 09:37:05.269 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.269504 | orchestrator | 09:37:05.269 STDOUT terraform:  + size = 20 2025-10-09 09:37:05.269520 | orchestrator | 09:37:05.269 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:05.269533 | orchestrator | 09:37:05.269 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:05.269548 | orchestrator | 09:37:05.269 STDOUT terraform:  } 2025-10-09 09:37:05.269581 | orchestrator | 09:37:05.269 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-10-09 09:37:05.269634 | orchestrator | 09:37:05.269 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:05.269657 | 
orchestrator | 09:37:05.269 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:05.269672 | orchestrator | 09:37:05.269 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.269714 | orchestrator | 09:37:05.269 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.269973 | orchestrator | 09:37:05.269 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:05.270071 | orchestrator | 09:37:05.269 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-10-09 09:37:05.270271 | orchestrator | 09:37:05.270 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.270291 | orchestrator | 09:37:05.270 STDOUT terraform:  + size = 20 2025-10-09 09:37:05.270308 | orchestrator | 09:37:05.270 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:05.270323 | orchestrator | 09:37:05.270 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:05.270339 | orchestrator | 09:37:05.270 STDOUT terraform:  } 2025-10-09 09:37:05.270446 | orchestrator | 09:37:05.270 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-10-09 09:37:05.270483 | orchestrator | 09:37:05.270 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:05.270580 | orchestrator | 09:37:05.270 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:05.270599 | orchestrator | 09:37:05.270 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.270630 | orchestrator | 09:37:05.270 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.270673 | orchestrator | 09:37:05.270 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:05.270969 | orchestrator | 09:37:05.270 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-10-09 09:37:05.271029 | orchestrator | 09:37:05.270 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.271048 | orchestrator | 09:37:05.270 STDOUT terraform:  + size 
= 20 2025-10-09 09:37:05.271060 | orchestrator | 09:37:05.271 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:05.271076 | orchestrator | 09:37:05.271 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:05.271088 | orchestrator | 09:37:05.271 STDOUT terraform:  } 2025-10-09 09:37:05.271144 | orchestrator | 09:37:05.271 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-10-09 09:37:05.271162 | orchestrator | 09:37:05.271 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:05.271251 | orchestrator | 09:37:05.271 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:05.271269 | orchestrator | 09:37:05.271 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.271311 | orchestrator | 09:37:05.271 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.271624 | orchestrator | 09:37:05.271 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:05.271671 | orchestrator | 09:37:05.271 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-10-09 09:37:05.271925 | orchestrator | 09:37:05.271 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.271945 | orchestrator | 09:37:05.271 STDOUT terraform:  + size = 20 2025-10-09 09:37:05.271960 | orchestrator | 09:37:05.271 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:05.271976 | orchestrator | 09:37:05.271 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:05.271991 | orchestrator | 09:37:05.271 STDOUT terraform:  } 2025-10-09 09:37:05.272108 | orchestrator | 09:37:05.271 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-10-09 09:37:05.272211 | orchestrator | 09:37:05.272 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:05.272244 | orchestrator | 09:37:05.272 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 
09:37:05.272260 | orchestrator | 09:37:05.272 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.272302 | orchestrator | 09:37:05.272 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.272549 | orchestrator | 09:37:05.272 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:05.272578 | orchestrator | 09:37:05.272 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-10-09 09:37:05.272694 | orchestrator | 09:37:05.272 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.272713 | orchestrator | 09:37:05.272 STDOUT terraform:  + size = 20 2025-10-09 09:37:05.272728 | orchestrator | 09:37:05.272 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:05.272746 | orchestrator | 09:37:05.272 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:05.272761 | orchestrator | 09:37:05.272 STDOUT terraform:  } 2025-10-09 09:37:05.272811 | orchestrator | 09:37:05.272 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-10-09 09:37:05.272863 | orchestrator | 09:37:05.272 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:05.272907 | orchestrator | 09:37:05.272 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:05.273154 | orchestrator | 09:37:05.272 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.273199 | orchestrator | 09:37:05.273 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.273215 | orchestrator | 09:37:05.273 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:05.273258 | orchestrator | 09:37:05.273 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-10-09 09:37:05.273433 | orchestrator | 09:37:05.273 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.273454 | orchestrator | 09:37:05.273 STDOUT terraform:  + size = 20 2025-10-09 09:37:05.273471 | orchestrator | 09:37:05.273 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-10-09 09:37:05.273488 | orchestrator | 09:37:05.273 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:05.273505 | orchestrator | 09:37:05.273 STDOUT terraform:  } 2025-10-09 09:37:05.273555 | orchestrator | 09:37:05.273 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-10-09 09:37:05.273876 | orchestrator | 09:37:05.273 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:05.273920 | orchestrator | 09:37:05.273 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:05.274088 | orchestrator | 09:37:05.273 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.274134 | orchestrator | 09:37:05.274 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.274350 | orchestrator | 09:37:05.274 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:05.274442 | orchestrator | 09:37:05.274 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-10-09 09:37:05.274735 | orchestrator | 09:37:05.274 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.274755 | orchestrator | 09:37:05.274 STDOUT terraform:  + size = 20 2025-10-09 09:37:05.274770 | orchestrator | 09:37:05.274 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:05.274809 | orchestrator | 09:37:05.274 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:05.274822 | orchestrator | 09:37:05.274 STDOUT terraform:  } 2025-10-09 09:37:05.275074 | orchestrator | 09:37:05.274 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-10-09 09:37:05.275253 | orchestrator | 09:37:05.275 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-10-09 09:37:05.275459 | orchestrator | 09:37:05.275 STDOUT terraform:  + attachment = (known after apply) 2025-10-09 09:37:05.275478 | orchestrator | 09:37:05.275 STDOUT terraform:  + availability_zone = 
"nova" 2025-10-09 09:37:05.275528 | orchestrator | 09:37:05.275 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.275616 | orchestrator | 09:37:05.275 STDOUT terraform:  + metadata = (known after apply) 2025-10-09 09:37:05.275748 | orchestrator | 09:37:05.275 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-10-09 09:37:05.275801 | orchestrator | 09:37:05.275 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.275915 | orchestrator | 09:37:05.275 STDOUT terraform:  + size = 20 2025-10-09 09:37:05.275942 | orchestrator | 09:37:05.275 STDOUT terraform:  + volume_retype_policy = "never" 2025-10-09 09:37:05.275966 | orchestrator | 09:37:05.275 STDOUT terraform:  + volume_type = "ssd" 2025-10-09 09:37:05.275979 | orchestrator | 09:37:05.275 STDOUT terraform:  } 2025-10-09 09:37:05.276121 | orchestrator | 09:37:05.275 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-10-09 09:37:05.276200 | orchestrator | 09:37:05.276 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-10-09 09:37:05.276256 | orchestrator | 09:37:05.276 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-09 09:37:05.276314 | orchestrator | 09:37:05.276 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-09 09:37:05.276502 | orchestrator | 09:37:05.276 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-09 09:37:05.276552 | orchestrator | 09:37:05.276 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:05.276585 | orchestrator | 09:37:05.276 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.276700 | orchestrator | 09:37:05.276 STDOUT terraform:  + config_drive = true 2025-10-09 09:37:05.276751 | orchestrator | 09:37:05.276 STDOUT terraform:  + created = (known after apply) 2025-10-09 09:37:05.276808 | orchestrator | 09:37:05.276 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-09 09:37:05.276844 | orchestrator | 
09:37:05.276 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-10-09 09:37:05.276961 | orchestrator | 09:37:05.276 STDOUT terraform:  + force_delete = false 2025-10-09 09:37:05.277044 | orchestrator | 09:37:05.276 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-10-09 09:37:05.277079 | orchestrator | 09:37:05.277 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.277294 | orchestrator | 09:37:05.277 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:05.277438 | orchestrator | 09:37:05.277 STDOUT terraform:  + image_name = (known after apply) 2025-10-09 09:37:05.277480 | orchestrator | 09:37:05.277 STDOUT terraform:  + key_pair = "testbed" 2025-10-09 09:37:05.277589 | orchestrator | 09:37:05.277 STDOUT terraform:  + name = "testbed-manager" 2025-10-09 09:37:05.277623 | orchestrator | 09:37:05.277 STDOUT terraform:  + power_state = "active" 2025-10-09 09:37:05.277818 | orchestrator | 09:37:05.277 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.277926 | orchestrator | 09:37:05.277 STDOUT terraform:  + security_groups = (known after apply) 2025-10-09 09:37:05.277960 | orchestrator | 09:37:05.277 STDOUT terraform:  + stop_before_destroy = false 2025-10-09 09:37:05.278081 | orchestrator | 09:37:05.277 STDOUT terraform:  + updated = (known after apply) 2025-10-09 09:37:05.278210 | orchestrator | 09:37:05.278 STDOUT terraform:  + user_data = (sensitive value) 2025-10-09 09:37:05.278370 | orchestrator | 09:37:05.278 STDOUT terraform:  + block_device { 2025-10-09 09:37:05.278401 | orchestrator | 09:37:05.278 STDOUT terraform:  + boot_index = 0 2025-10-09 09:37:05.278452 | orchestrator | 09:37:05.278 STDOUT terraform:  + delete_on_termination = false 2025-10-09 09:37:05.278730 | orchestrator | 09:37:05.278 STDOUT terraform:  + destination_type = "volume" 2025-10-09 09:37:05.278925 | orchestrator | 09:37:05.278 STDOUT terraform:  + multiattach = false 2025-10-09 09:37:05.279181 | orchestrator | 
09:37:05.278 STDOUT terraform:  + source_type = "volume" 2025-10-09 09:37:05.279541 | orchestrator | 09:37:05.279 STDOUT terraform:  + uuid = (known after apply) 2025-10-09 09:37:05.279705 | orchestrator | 09:37:05.279 STDOUT terraform:  } 2025-10-09 09:37:05.279718 | orchestrator | 09:37:05.279 STDOUT terraform:  + network { 2025-10-09 09:37:05.279745 | orchestrator | 09:37:05.279 STDOUT terraform:  + access_network = false 2025-10-09 09:37:05.280062 | orchestrator | 09:37:05.279 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-10-09 09:37:05.280123 | orchestrator | 09:37:05.280 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-10-09 09:37:05.280457 | orchestrator | 09:37:05.280 STDOUT terraform:  + mac = (known after apply) 2025-10-09 09:37:05.280556 | orchestrator | 09:37:05.280 STDOUT terraform:  + name = (known after apply) 2025-10-09 09:37:05.280661 | orchestrator | 09:37:05.280 STDOUT terraform:  + port = (known after apply) 2025-10-09 09:37:05.280724 | orchestrator | 09:37:05.280 STDOUT terraform:  + uuid = (known after apply) 2025-10-09 09:37:05.280856 | orchestrator | 09:37:05.280 STDOUT terraform:  } 2025-10-09 09:37:05.280869 | orchestrator | 09:37:05.280 STDOUT terraform:  } 2025-10-09 09:37:05.281066 | orchestrator | 09:37:05.280 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-10-09 09:37:05.281306 | orchestrator | 09:37:05.281 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-10-09 09:37:05.281444 | orchestrator | 09:37:05.281 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-09 09:37:05.281861 | orchestrator | 09:37:05.281 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-09 09:37:05.282195 | orchestrator | 09:37:05.281 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-09 09:37:05.282411 | orchestrator | 09:37:05.282 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:05.282620 | orchestrator | 
09:37:05.282 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.282776 | orchestrator | 09:37:05.282 STDOUT terraform:  + config_drive = true 2025-10-09 09:37:05.283025 | orchestrator | 09:37:05.282 STDOUT terraform:  + created = (known after apply) 2025-10-09 09:37:05.283394 | orchestrator | 09:37:05.282 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-09 09:37:05.283641 | orchestrator | 09:37:05.283 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-10-09 09:37:05.283860 | orchestrator | 09:37:05.283 STDOUT terraform:  + force_delete = false 2025-10-09 09:37:05.284220 | orchestrator | 09:37:05.283 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-10-09 09:37:05.284362 | orchestrator | 09:37:05.284 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.284467 | orchestrator | 09:37:05.284 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:05.284558 | orchestrator | 09:37:05.284 STDOUT terraform:  + image_name = (known after apply) 2025-10-09 09:37:05.284625 | orchestrator | 09:37:05.284 STDOUT terraform:  + key_pair = "testbed" 2025-10-09 09:37:05.284694 | orchestrator | 09:37:05.284 STDOUT terraform:  + name = "testbed-node-0" 2025-10-09 09:37:05.285118 | orchestrator | 09:37:05.284 STDOUT terraform:  + power_state = "active" 2025-10-09 09:37:05.285207 | orchestrator | 09:37:05.285 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.285418 | orchestrator | 09:37:05.285 STDOUT terraform:  + security_groups = (known after apply) 2025-10-09 09:37:05.285447 | orchestrator | 09:37:05.285 STDOUT terraform:  + stop_before_destroy = false 2025-10-09 09:37:05.285871 | orchestrator | 09:37:05.285 STDOUT terraform:  + updated = (known after apply) 2025-10-09 09:37:05.286491 | orchestrator | 09:37:05.285 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-10-09 09:37:05.286787 | orchestrator | 09:37:05.286 STDOUT terraform:  + block_device { 
2025-10-09 09:37:05.286875 | orchestrator | 09:37:05.286 STDOUT terraform:  + boot_index = 0 2025-10-09 09:37:05.286913 | orchestrator | 09:37:05.286 STDOUT terraform:  + delete_on_termination = false 2025-10-09 09:37:05.287181 | orchestrator | 09:37:05.286 STDOUT terraform:  + destination_type = "volume" 2025-10-09 09:37:05.287787 | orchestrator | 09:37:05.287 STDOUT terraform:  + multiattach = false 2025-10-09 09:37:05.287872 | orchestrator | 09:37:05.287 STDOUT terraform:  + source_type = "volume" 2025-10-09 09:37:05.287912 | orchestrator | 09:37:05.287 STDOUT terraform:  + uuid = (known after apply) 2025-10-09 09:37:05.287921 | orchestrator | 09:37:05.287 STDOUT terraform:  } 2025-10-09 09:37:05.287950 | orchestrator | 09:37:05.287 STDOUT terraform:  + network { 2025-10-09 09:37:05.287974 | orchestrator | 09:37:05.287 STDOUT terraform:  + access_network = false 2025-10-09 09:37:05.288002 | orchestrator | 09:37:05.287 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-10-09 09:37:05.288056 | orchestrator | 09:37:05.287 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-10-09 09:37:05.288088 | orchestrator | 09:37:05.288 STDOUT terraform:  + mac = (known after apply) 2025-10-09 09:37:05.288122 | orchestrator | 09:37:05.288 STDOUT terraform:  + name = (known after apply) 2025-10-09 09:37:05.288151 | orchestrator | 09:37:05.288 STDOUT terraform:  + port = (known after apply) 2025-10-09 09:37:05.288196 | orchestrator | 09:37:05.288 STDOUT terraform:  + uuid = (known after apply) 2025-10-09 09:37:05.288203 | orchestrator | 09:37:05.288 STDOUT terraform:  } 2025-10-09 09:37:05.288212 | orchestrator | 09:37:05.288 STDOUT terraform:  } 2025-10-09 09:37:05.288274 | orchestrator | 09:37:05.288 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-10-09 09:37:05.288306 | orchestrator | 09:37:05.288 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-10-09 09:37:05.288343 | orchestrator | 
09:37:05.288 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-09 09:37:05.288376 | orchestrator | 09:37:05.288 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-09 09:37:05.288410 | orchestrator | 09:37:05.288 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-09 09:37:05.288450 | orchestrator | 09:37:05.288 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:05.288459 | orchestrator | 09:37:05.288 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.288497 | orchestrator | 09:37:05.288 STDOUT terraform:  + config_drive = true 2025-10-09 09:37:05.288524 | orchestrator | 09:37:05.288 STDOUT terraform:  + created = (known after apply) 2025-10-09 09:37:05.288557 | orchestrator | 09:37:05.288 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-09 09:37:05.288579 | orchestrator | 09:37:05.288 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-10-09 09:37:05.288606 | orchestrator | 09:37:05.288 STDOUT terraform:  + force_delete = false 2025-10-09 09:37:05.288632 | orchestrator | 09:37:05.288 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-10-09 09:37:05.288671 | orchestrator | 09:37:05.288 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.288702 | orchestrator | 09:37:05.288 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:05.288752 | orchestrator | 09:37:05.288 STDOUT terraform:  + image_name = (known after apply) 2025-10-09 09:37:05.288761 | orchestrator | 09:37:05.288 STDOUT terraform:  + key_pair = "testbed" 2025-10-09 09:37:05.288793 | orchestrator | 09:37:05.288 STDOUT terraform:  + name = "testbed-node-1" 2025-10-09 09:37:05.288802 | orchestrator | 09:37:05.288 STDOUT terraform:  + power_state = "active" 2025-10-09 09:37:05.288839 | orchestrator | 09:37:05.288 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.288877 | orchestrator | 09:37:05.288 STDOUT terraform:  + security_groups = (known after apply) 
2025-10-09 09:37:05.288896 | orchestrator | 09:37:05.288 STDOUT terraform:  + stop_before_destroy = false 2025-10-09 09:37:05.288930 | orchestrator | 09:37:05.288 STDOUT terraform:  + updated = (known after apply) 2025-10-09 09:37:05.288977 | orchestrator | 09:37:05.288 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-10-09 09:37:05.288987 | orchestrator | 09:37:05.288 STDOUT terraform:  + block_device { 2025-10-09 09:37:05.289021 | orchestrator | 09:37:05.288 STDOUT terraform:  + boot_index = 0 2025-10-09 09:37:05.289048 | orchestrator | 09:37:05.289 STDOUT terraform:  + delete_on_termination = false 2025-10-09 09:37:05.289081 | orchestrator | 09:37:05.289 STDOUT terraform:  + destination_type = "volume" 2025-10-09 09:37:05.289106 | orchestrator | 09:37:05.289 STDOUT terraform:  + multiattach = false 2025-10-09 09:37:05.289131 | orchestrator | 09:37:05.289 STDOUT terraform:  + source_type = "volume" 2025-10-09 09:37:05.289173 | orchestrator | 09:37:05.289 STDOUT terraform:  + uuid = (known after apply) 2025-10-09 09:37:05.289182 | orchestrator | 09:37:05.289 STDOUT terraform:  } 2025-10-09 09:37:05.289188 | orchestrator | 09:37:05.289 STDOUT terraform:  + network { 2025-10-09 09:37:05.289205 | orchestrator | 09:37:05.289 STDOUT terraform:  + access_network = false 2025-10-09 09:37:05.289235 | orchestrator | 09:37:05.289 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-10-09 09:37:05.289265 | orchestrator | 09:37:05.289 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-10-09 09:37:05.289296 | orchestrator | 09:37:05.289 STDOUT terraform:  + mac = (known after apply) 2025-10-09 09:37:05.289325 | orchestrator | 09:37:05.289 STDOUT terraform:  + name = (known after apply) 2025-10-09 09:37:05.289359 | orchestrator | 09:37:05.289 STDOUT terraform:  + port = (known after apply) 2025-10-09 09:37:05.289386 | orchestrator | 09:37:05.289 STDOUT terraform:  + uuid = (known after apply) 2025-10-09 09:37:05.289395 | 
orchestrator | 09:37:05.289 STDOUT terraform:  } 2025-10-09 09:37:05.289402 | orchestrator | 09:37:05.289 STDOUT terraform:  } 2025-10-09 09:37:05.289450 | orchestrator | 09:37:05.289 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-10-09 09:37:05.289489 | orchestrator | 09:37:05.289 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-10-09 09:37:05.289523 | orchestrator | 09:37:05.289 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-10-09 09:37:05.289555 | orchestrator | 09:37:05.289 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-10-09 09:37:05.289593 | orchestrator | 09:37:05.289 STDOUT terraform:  + all_metadata = (known after apply) 2025-10-09 09:37:05.289622 | orchestrator | 09:37:05.289 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:05.289645 | orchestrator | 09:37:05.289 STDOUT terraform:  + availability_zone = "nova" 2025-10-09 09:37:05.289667 | orchestrator | 09:37:05.289 STDOUT terraform:  + config_drive = true 2025-10-09 09:37:05.289701 | orchestrator | 09:37:05.289 STDOUT terraform:  + created = (known after apply) 2025-10-09 09:37:05.289735 | orchestrator | 09:37:05.289 STDOUT terraform:  + flavor_id = (known after apply) 2025-10-09 09:37:05.289762 | orchestrator | 09:37:05.289 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-10-09 09:37:05.289776 | orchestrator | 09:37:05.289 STDOUT terraform:  + force_delete = false 2025-10-09 09:37:05.289813 | orchestrator | 09:37:05.289 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-10-09 09:37:05.289843 | orchestrator | 09:37:05.289 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.289877 | orchestrator | 09:37:05.289 STDOUT terraform:  + image_id = (known after apply) 2025-10-09 09:37:05.289910 | orchestrator | 09:37:05.289 STDOUT terraform:  + image_name = (known after apply) 2025-10-09 09:37:05.289934 | orchestrator | 09:37:05.289 STDOUT terraform:  + 
2025-10-09 09:37:05.289964 | orchestrator | 09:37:05.289 STDOUT terraform:
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
terraform:  } 2025-10-09 09:37:05.332484 | orchestrator | 09:37:05.332 STDOUT terraform:  } 2025-10-09 09:37:05.332535 | orchestrator | 09:37:05.332 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-10-09 09:37:05.332580 | orchestrator | 09:37:05.332 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-10-09 09:37:05.332621 | orchestrator | 09:37:05.332 STDOUT terraform:  + admin_state_up = (known after apply) 2025-10-09 09:37:05.332661 | orchestrator | 09:37:05.332 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-10-09 09:37:05.332700 | orchestrator | 09:37:05.332 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-10-09 09:37:05.332739 | orchestrator | 09:37:05.332 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:05.332779 | orchestrator | 09:37:05.332 STDOUT terraform:  + device_id = (known after apply) 2025-10-09 09:37:05.332816 | orchestrator | 09:37:05.332 STDOUT terraform:  + device_owner = (known after apply) 2025-10-09 09:37:05.332855 | orchestrator | 09:37:05.332 STDOUT terraform:  + dns_assignment = (known after apply) 2025-10-09 09:37:05.332896 | orchestrator | 09:37:05.332 STDOUT terraform:  + dns_name = (known after apply) 2025-10-09 09:37:05.332934 | orchestrator | 09:37:05.332 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.332975 | orchestrator | 09:37:05.332 STDOUT terraform:  + mac_address = (known after apply) 2025-10-09 09:37:05.333025 | orchestrator | 09:37:05.332 STDOUT terraform:  + network_id = (known after apply) 2025-10-09 09:37:05.333068 | orchestrator | 09:37:05.333 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-10-09 09:37:05.333105 | orchestrator | 09:37:05.333 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-10-09 09:37:05.333146 | orchestrator | 09:37:05.333 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.333183 | 
orchestrator | 09:37:05.333 STDOUT terraform:  + security_group_ids = (known after apply) 2025-10-09 09:37:05.333226 | orchestrator | 09:37:05.333 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.333249 | orchestrator | 09:37:05.333 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.333282 | orchestrator | 09:37:05.333 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-10-09 09:37:05.333291 | orchestrator | 09:37:05.333 STDOUT terraform:  } 2025-10-09 09:37:05.333320 | orchestrator | 09:37:05.333 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.333350 | orchestrator | 09:37:05.333 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-10-09 09:37:05.333370 | orchestrator | 09:37:05.333 STDOUT terraform:  } 2025-10-09 09:37:05.333392 | orchestrator | 09:37:05.333 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.333423 | orchestrator | 09:37:05.333 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-10-09 09:37:05.333430 | orchestrator | 09:37:05.333 STDOUT terraform:  } 2025-10-09 09:37:05.333462 | orchestrator | 09:37:05.333 STDOUT terraform:  + binding (known after apply) 2025-10-09 09:37:05.333469 | orchestrator | 09:37:05.333 STDOUT terraform:  + fixed_ip { 2025-10-09 09:37:05.333501 | orchestrator | 09:37:05.333 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-10-09 09:37:05.333535 | orchestrator | 09:37:05.333 STDOUT terraform:  + subnet_id = (known after apply) 2025-10-09 09:37:05.333542 | orchestrator | 09:37:05.333 STDOUT terraform:  } 2025-10-09 09:37:05.333566 | orchestrator | 09:37:05.333 STDOUT terraform:  } 2025-10-09 09:37:05.333612 | orchestrator | 09:37:05.333 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-10-09 09:37:05.333662 | orchestrator | 09:37:05.333 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-10-09 09:37:05.333701 | orchestrator | 09:37:05.333 STDOUT 
terraform:  + admin_state_up = (known after apply) 2025-10-09 09:37:05.333742 | orchestrator | 09:37:05.333 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-10-09 09:37:05.333780 | orchestrator | 09:37:05.333 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-10-09 09:37:05.333817 | orchestrator | 09:37:05.333 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:05.333858 | orchestrator | 09:37:05.333 STDOUT terraform:  + device_id = (known after apply) 2025-10-09 09:37:05.333896 | orchestrator | 09:37:05.333 STDOUT terraform:  + device_owner = (known after apply) 2025-10-09 09:37:05.333938 | orchestrator | 09:37:05.333 STDOUT terraform:  + dns_assignment = (known after apply) 2025-10-09 09:37:05.333975 | orchestrator | 09:37:05.333 STDOUT terraform:  + dns_name = (known after apply) 2025-10-09 09:37:05.334075 | orchestrator | 09:37:05.333 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.334087 | orchestrator | 09:37:05.334 STDOUT terraform:  + mac_address = (known after apply) 2025-10-09 09:37:05.334106 | orchestrator | 09:37:05.334 STDOUT terraform:  + network_id = (known after apply) 2025-10-09 09:37:05.334146 | orchestrator | 09:37:05.334 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-10-09 09:37:05.334186 | orchestrator | 09:37:05.334 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-10-09 09:37:05.334224 | orchestrator | 09:37:05.334 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.334263 | orchestrator | 09:37:05.334 STDOUT terraform:  + security_group_ids = (known after apply) 2025-10-09 09:37:05.334301 | orchestrator | 09:37:05.334 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.334326 | orchestrator | 09:37:05.334 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.334359 | orchestrator | 09:37:05.334 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-10-09 09:37:05.334367 | orchestrator | 
09:37:05.334 STDOUT terraform:  } 2025-10-09 09:37:05.334394 | orchestrator | 09:37:05.334 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.334427 | orchestrator | 09:37:05.334 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-10-09 09:37:05.334434 | orchestrator | 09:37:05.334 STDOUT terraform:  } 2025-10-09 09:37:05.334460 | orchestrator | 09:37:05.334 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.334493 | orchestrator | 09:37:05.334 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-10-09 09:37:05.334500 | orchestrator | 09:37:05.334 STDOUT terraform:  } 2025-10-09 09:37:05.334533 | orchestrator | 09:37:05.334 STDOUT terraform:  + binding (known after apply) 2025-10-09 09:37:05.334541 | orchestrator | 09:37:05.334 STDOUT terraform:  + fixed_ip { 2025-10-09 09:37:05.334574 | orchestrator | 09:37:05.334 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-10-09 09:37:05.334606 | orchestrator | 09:37:05.334 STDOUT terraform:  + subnet_id = (known after apply) 2025-10-09 09:37:05.334624 | orchestrator | 09:37:05.334 STDOUT terraform:  } 2025-10-09 09:37:05.334631 | orchestrator | 09:37:05.334 STDOUT terraform:  } 2025-10-09 09:37:05.334685 | orchestrator | 09:37:05.334 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-10-09 09:37:05.334735 | orchestrator | 09:37:05.334 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-10-09 09:37:05.334773 | orchestrator | 09:37:05.334 STDOUT terraform:  + admin_state_up = (known after apply) 2025-10-09 09:37:05.334813 | orchestrator | 09:37:05.334 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-10-09 09:37:05.334854 | orchestrator | 09:37:05.334 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-10-09 09:37:05.334892 | orchestrator | 09:37:05.334 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:05.334933 | orchestrator | 09:37:05.334 STDOUT 
terraform:  + device_id = (known after apply) 2025-10-09 09:37:05.334971 | orchestrator | 09:37:05.334 STDOUT terraform:  + device_owner = (known after apply) 2025-10-09 09:37:05.335022 | orchestrator | 09:37:05.334 STDOUT terraform:  + dns_assignment = (known after apply) 2025-10-09 09:37:05.335063 | orchestrator | 09:37:05.335 STDOUT terraform:  + dns_name = (known after apply) 2025-10-09 09:37:05.335110 | orchestrator | 09:37:05.335 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.335142 | orchestrator | 09:37:05.335 STDOUT terraform:  + mac_address = (known after apply) 2025-10-09 09:37:05.335183 | orchestrator | 09:37:05.335 STDOUT terraform:  + network_id = (known after apply) 2025-10-09 09:37:05.335221 | orchestrator | 09:37:05.335 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-10-09 09:37:05.335258 | orchestrator | 09:37:05.335 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-10-09 09:37:05.335298 | orchestrator | 09:37:05.335 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.335335 | orchestrator | 09:37:05.335 STDOUT terraform:  + security_group_ids = (known after apply) 2025-10-09 09:37:05.335376 | orchestrator | 09:37:05.335 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.335398 | orchestrator | 09:37:05.335 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.335434 | orchestrator | 09:37:05.335 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-10-09 09:37:05.335441 | orchestrator | 09:37:05.335 STDOUT terraform:  } 2025-10-09 09:37:05.335469 | orchestrator | 09:37:05.335 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.335498 | orchestrator | 09:37:05.335 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-10-09 09:37:05.335516 | orchestrator | 09:37:05.335 STDOUT terraform:  } 2025-10-09 09:37:05.335538 | orchestrator | 09:37:05.335 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.335568 | 
orchestrator | 09:37:05.335 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-10-09 09:37:05.335587 | orchestrator | 09:37:05.335 STDOUT terraform:  } 2025-10-09 09:37:05.335613 | orchestrator | 09:37:05.335 STDOUT terraform:  + binding (known after apply) 2025-10-09 09:37:05.335634 | orchestrator | 09:37:05.335 STDOUT terraform:  + fixed_ip { 2025-10-09 09:37:05.335659 | orchestrator | 09:37:05.335 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-10-09 09:37:05.335693 | orchestrator | 09:37:05.335 STDOUT terraform:  + subnet_id = (known after apply) 2025-10-09 09:37:05.335700 | orchestrator | 09:37:05.335 STDOUT terraform:  } 2025-10-09 09:37:05.335722 | orchestrator | 09:37:05.335 STDOUT terraform:  } 2025-10-09 09:37:05.335769 | orchestrator | 09:37:05.335 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-10-09 09:37:05.335819 | orchestrator | 09:37:05.335 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-10-09 09:37:05.335856 | orchestrator | 09:37:05.335 STDOUT terraform:  + admin_state_up = (known after apply) 2025-10-09 09:37:05.335900 | orchestrator | 09:37:05.335 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-10-09 09:37:05.335938 | orchestrator | 09:37:05.335 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-10-09 09:37:05.335981 | orchestrator | 09:37:05.335 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:05.336037 | orchestrator | 09:37:05.335 STDOUT terraform:  + device_id = (known after apply) 2025-10-09 09:37:05.336075 | orchestrator | 09:37:05.336 STDOUT terraform:  + device_owner = (known after apply) 2025-10-09 09:37:05.336116 | orchestrator | 09:37:05.336 STDOUT terraform:  + dns_assignment = (known after apply) 2025-10-09 09:37:05.336154 | orchestrator | 09:37:05.336 STDOUT terraform:  + dns_name = (known after apply) 2025-10-09 09:37:05.336193 | orchestrator | 09:37:05.336 STDOUT terraform:  
+ id = (known after apply) 2025-10-09 09:37:05.336231 | orchestrator | 09:37:05.336 STDOUT terraform:  + mac_address = (known after apply) 2025-10-09 09:37:05.336272 | orchestrator | 09:37:05.336 STDOUT terraform:  + network_id = (known after apply) 2025-10-09 09:37:05.336309 | orchestrator | 09:37:05.336 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-10-09 09:37:05.336350 | orchestrator | 09:37:05.336 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-10-09 09:37:05.336386 | orchestrator | 09:37:05.336 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.336427 | orchestrator | 09:37:05.336 STDOUT terraform:  + security_group_ids = (known after apply) 2025-10-09 09:37:05.336466 | orchestrator | 09:37:05.336 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.336493 | orchestrator | 09:37:05.336 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.336523 | orchestrator | 09:37:05.336 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-10-09 09:37:05.336543 | orchestrator | 09:37:05.336 STDOUT terraform:  } 2025-10-09 09:37:05.336565 | orchestrator | 09:37:05.336 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.336599 | orchestrator | 09:37:05.336 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-10-09 09:37:05.336606 | orchestrator | 09:37:05.336 STDOUT terraform:  } 2025-10-09 09:37:05.336632 | orchestrator | 09:37:05.336 STDOUT terraform:  + allowed_address_pairs { 2025-10-09 09:37:05.336664 | orchestrator | 09:37:05.336 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-10-09 09:37:05.336686 | orchestrator | 09:37:05.336 STDOUT terraform:  } 2025-10-09 09:37:05.336713 | orchestrator | 09:37:05.336 STDOUT terraform:  + binding (known after apply) 2025-10-09 09:37:05.336722 | orchestrator | 09:37:05.336 STDOUT terraform:  + fixed_ip { 2025-10-09 09:37:05.336756 | orchestrator | 09:37:05.336 STDOUT terraform:  + ip_address = "192.168.16.15" 
2025-10-09 09:37:05.336787 | orchestrator | 09:37:05.336 STDOUT terraform:  + subnet_id = (known after apply) 2025-10-09 09:37:05.336806 | orchestrator | 09:37:05.336 STDOUT terraform:  } 2025-10-09 09:37:05.336813 | orchestrator | 09:37:05.336 STDOUT terraform:  } 2025-10-09 09:37:05.336873 | orchestrator | 09:37:05.336 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-10-09 09:37:05.336922 | orchestrator | 09:37:05.336 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-10-09 09:37:05.336943 | orchestrator | 09:37:05.336 STDOUT terraform:  + force_destroy = fal 2025-10-09 09:37:05.337049 | orchestrator | 09:37:05.336 STDOUT terraform: se 2025-10-09 09:37:05.337059 | orchestrator | 09:37:05.337 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.337091 | orchestrator | 09:37:05.337 STDOUT terraform:  + port_id = (known after apply) 2025-10-09 09:37:05.337128 | orchestrator | 09:37:05.337 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.337158 | orchestrator | 09:37:05.337 STDOUT terraform:  + router_id = (known after apply) 2025-10-09 09:37:05.337189 | orchestrator | 09:37:05.337 STDOUT terraform:  + subnet_id = (known after apply) 2025-10-09 09:37:05.337208 | orchestrator | 09:37:05.337 STDOUT terraform:  } 2025-10-09 09:37:05.342267 | orchestrator | 09:37:05.337 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-10-09 09:37:05.344217 | orchestrator | 09:37:05.342 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-10-09 09:37:05.344235 | orchestrator | 09:37:05.342 STDOUT terraform:  + admin_state_up = (known after apply) 2025-10-09 09:37:05.344240 | orchestrator | 09:37:05.342 STDOUT terraform:  + all_tags = (known after apply) 2025-10-09 09:37:05.344244 | orchestrator | 09:37:05.342 STDOUT terraform:  + availability_zone_hints = [ 2025-10-09 09:37:05.344249 | orchestrator | 
09:37:05.342 STDOUT terraform:  + "nova", 2025-10-09 09:37:05.344253 | orchestrator | 09:37:05.342 STDOUT terraform:  ] 2025-10-09 09:37:05.344257 | orchestrator | 09:37:05.342 STDOUT terraform:  + distributed = (known after apply) 2025-10-09 09:37:05.344262 | orchestrator | 09:37:05.342 STDOUT terraform:  + enable_snat = (known after apply) 2025-10-09 09:37:05.344277 | orchestrator | 09:37:05.342 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-10-09 09:37:05.344282 | orchestrator | 09:37:05.342 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-10-09 09:37:05.344330 | orchestrator | 09:37:05.342 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.344335 | orchestrator | 09:37:05.342 STDOUT terraform:  + name = "testbed" 2025-10-09 09:37:05.344342 | orchestrator | 09:37:05.342 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.344346 | orchestrator | 09:37:05.342 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.344351 | orchestrator | 09:37:05.342 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-10-09 09:37:05.344355 | orchestrator | 09:37:05.343 STDOUT terraform:  } 2025-10-09 09:37:05.344359 | orchestrator | 09:37:05.343 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-10-09 09:37:05.344364 | orchestrator | 09:37:05.343 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-10-09 09:37:05.344378 | orchestrator | 09:37:05.343 STDOUT terraform:  + description = "ssh" 2025-10-09 09:37:05.344382 | orchestrator | 09:37:05.343 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:05.344386 | orchestrator | 09:37:05.343 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:05.344390 | orchestrator | 09:37:05.343 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.344402 | orchestrator | 09:37:05.343 
STDOUT terraform:  + port_range_max = 22 2025-10-09 09:37:05.344406 | orchestrator | 09:37:05.343 STDOUT terraform:  + port_range_min = 22 2025-10-09 09:37:05.344410 | orchestrator | 09:37:05.343 STDOUT terraform:  + protocol = "tcp" 2025-10-09 09:37:05.344415 | orchestrator | 09:37:05.343 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.344419 | orchestrator | 09:37:05.343 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:05.344423 | orchestrator | 09:37:05.343 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:05.344427 | orchestrator | 09:37:05.343 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:05.344431 | orchestrator | 09:37:05.343 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:05.344438 | orchestrator | 09:37:05.343 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.344443 | orchestrator | 09:37:05.343 STDOUT terraform:  } 2025-10-09 09:37:05.344447 | orchestrator | 09:37:05.343 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-10-09 09:37:05.344451 | orchestrator | 09:37:05.343 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-10-09 09:37:05.344455 | orchestrator | 09:37:05.343 STDOUT terraform:  + description = "wireguard" 2025-10-09 09:37:05.344459 | orchestrator | 09:37:05.343 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:05.344472 | orchestrator | 09:37:05.343 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:05.344476 | orchestrator | 09:37:05.343 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.344481 | orchestrator | 09:37:05.343 STDOUT terraform:  + port_range_max = 51820 2025-10-09 09:37:05.344485 | orchestrator | 09:37:05.343 STDOUT terraform:  + port_range_min = 51820 2025-10-09 09:37:05.344489 | orchestrator | 09:37:05.343 STDOUT 
terraform:  + protocol = "udp" 2025-10-09 09:37:05.344493 | orchestrator | 09:37:05.343 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.344497 | orchestrator | 09:37:05.343 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:05.344501 | orchestrator | 09:37:05.343 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:05.344505 | orchestrator | 09:37:05.343 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:05.344509 | orchestrator | 09:37:05.343 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:05.344513 | orchestrator | 09:37:05.343 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.344518 | orchestrator | 09:37:05.343 STDOUT terraform:  } 2025-10-09 09:37:05.344522 | orchestrator | 09:37:05.343 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-10-09 09:37:05.344526 | orchestrator | 09:37:05.344 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-10-09 09:37:05.344533 | orchestrator | 09:37:05.344 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:05.344537 | orchestrator | 09:37:05.344 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:05.348704 | orchestrator | 09:37:05.344 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.348724 | orchestrator | 09:37:05.344 STDOUT terraform:  + protocol = "tcp" 2025-10-09 09:37:05.348728 | orchestrator | 09:37:05.344 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.348744 | orchestrator | 09:37:05.344 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:05.348749 | orchestrator | 09:37:05.344 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:05.348753 | orchestrator | 09:37:05.344 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-10-09 
09:37:05.348757 | orchestrator | 09:37:05.344 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:05.348761 | orchestrator | 09:37:05.344 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.348765 | orchestrator | 09:37:05.344 STDOUT terraform:  } 2025-10-09 09:37:05.348770 | orchestrator | 09:37:05.344 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-10-09 09:37:05.348774 | orchestrator | 09:37:05.344 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-10-09 09:37:05.348778 | orchestrator | 09:37:05.344 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:05.348783 | orchestrator | 09:37:05.344 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:05.348787 | orchestrator | 09:37:05.344 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.348791 | orchestrator | 09:37:05.344 STDOUT terraform:  + protocol = "udp" 2025-10-09 09:37:05.348795 | orchestrator | 09:37:05.345 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.348799 | orchestrator | 09:37:05.345 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:05.348803 | orchestrator | 09:37:05.345 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:05.348808 | orchestrator | 09:37:05.345 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-10-09 09:37:05.348812 | orchestrator | 09:37:05.345 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:05.348818 | orchestrator | 09:37:05.345 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.348822 | orchestrator | 09:37:05.345 STDOUT terraform:  } 2025-10-09 09:37:05.348826 | orchestrator | 09:37:05.345 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-10-09 09:37:05.348831 | orchestrator | 
09:37:05.345 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-10-09 09:37:05.348835 | orchestrator | 09:37:05.345 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:05.348839 | orchestrator | 09:37:05.345 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:05.348843 | orchestrator | 09:37:05.345 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.348853 | orchestrator | 09:37:05.345 STDOUT terraform:  + protocol = "icmp" 2025-10-09 09:37:05.348858 | orchestrator | 09:37:05.345 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.348862 | orchestrator | 09:37:05.345 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:05.348866 | orchestrator | 09:37:05.345 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:05.348870 | orchestrator | 09:37:05.345 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:05.348874 | orchestrator | 09:37:05.345 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:05.348878 | orchestrator | 09:37:05.345 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.348882 | orchestrator | 09:37:05.345 STDOUT terraform:  } 2025-10-09 09:37:05.348892 | orchestrator | 09:37:05.345 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-10-09 09:37:05.348896 | orchestrator | 09:37:05.345 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-10-09 09:37:05.348900 | orchestrator | 09:37:05.345 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:05.348904 | orchestrator | 09:37:05.345 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:05.348908 | orchestrator | 09:37:05.345 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.348913 | orchestrator | 09:37:05.345 STDOUT terraform:  + protocol = "tcp" 2025-10-09 
09:37:05.348917 | orchestrator | 09:37:05.345 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.348921 | orchestrator | 09:37:05.345 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:05.348925 | orchestrator | 09:37:05.345 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:05.348929 | orchestrator | 09:37:05.345 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:05.348933 | orchestrator | 09:37:05.345 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:05.348937 | orchestrator | 09:37:05.345 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.348941 | orchestrator | 09:37:05.346 STDOUT terraform:  } 2025-10-09 09:37:05.348945 | orchestrator | 09:37:05.346 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-10-09 09:37:05.348950 | orchestrator | 09:37:05.346 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-10-09 09:37:05.348954 | orchestrator | 09:37:05.346 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:05.348958 | orchestrator | 09:37:05.346 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:05.348962 | orchestrator | 09:37:05.346 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.348966 | orchestrator | 09:37:05.346 STDOUT terraform:  + protocol = "udp" 2025-10-09 09:37:05.348970 | orchestrator | 09:37:05.346 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.348981 | orchestrator | 09:37:05.346 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:05.348985 | orchestrator | 09:37:05.346 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:05.348989 | orchestrator | 09:37:05.346 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:05.348993 | orchestrator | 09:37:05.346 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-10-09 09:37:05.348998 | orchestrator | 09:37:05.346 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.349002 | orchestrator | 09:37:05.346 STDOUT terraform:  } 2025-10-09 09:37:05.349018 | orchestrator | 09:37:05.346 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-10-09 09:37:05.349022 | orchestrator | 09:37:05.346 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-10-09 09:37:05.349026 | orchestrator | 09:37:05.346 STDOUT terraform:  + direction = "ingress" 2025-10-09 09:37:05.349030 | orchestrator | 09:37:05.346 STDOUT terraform:  + ethertype = "IPv4" 2025-10-09 09:37:05.349034 | orchestrator | 09:37:05.346 STDOUT terraform:  + id = (known after apply) 2025-10-09 09:37:05.349038 | orchestrator | 09:37:05.346 STDOUT terraform:  + protocol = "icmp" 2025-10-09 09:37:05.349042 | orchestrator | 09:37:05.346 STDOUT terraform:  + region = (known after apply) 2025-10-09 09:37:05.349046 | orchestrator | 09:37:05.346 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-10-09 09:37:05.349050 | orchestrator | 09:37:05.346 STDOUT terraform:  + remote_group_id = (known after apply) 2025-10-09 09:37:05.349057 | orchestrator | 09:37:05.346 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-10-09 09:37:05.349061 | orchestrator | 09:37:05.346 STDOUT terraform:  + security_group_id = (known after apply) 2025-10-09 09:37:05.349065 | orchestrator | 09:37:05.346 STDOUT terraform:  + tenant_id = (known after apply) 2025-10-09 09:37:05.349069 | orchestrator | 09:37:05.346 STDOUT terraform:  } 2025-10-09 09:37:05.349073 | orchestrator | 09:37:05.346 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-10-09 09:37:05.349077 | orchestrator | 09:37:05.346 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_rule_vrrp" {
2025-10-09 09:37:05.349082 | orchestrator | 09:37:05.346 STDOUT terraform:  + description = "vrrp"
2025-10-09 09:37:05.349086 | orchestrator | 09:37:05.346 STDOUT terraform:  + direction = "ingress"
2025-10-09 09:37:05.349090 | orchestrator | 09:37:05.346 STDOUT terraform:  + ethertype = "IPv4"
2025-10-09 09:37:05.349094 | orchestrator | 09:37:05.346 STDOUT terraform:  + id = (known after apply)
2025-10-09 09:37:05.349098 | orchestrator | 09:37:05.347 STDOUT terraform:  + protocol = "112"
2025-10-09 09:37:05.349102 | orchestrator | 09:37:05.347 STDOUT terraform:  + region = (known after apply)
2025-10-09 09:37:05.349106 | orchestrator | 09:37:05.347 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-10-09 09:37:05.349114 | orchestrator | 09:37:05.347 STDOUT terraform:  + remote_group_id = (known after apply)
2025-10-09 09:37:05.349118 | orchestrator | 09:37:05.347 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-10-09 09:37:05.349122 | orchestrator | 09:37:05.347 STDOUT terraform:  + security_group_id = (known after apply)
2025-10-09 09:37:05.349126 | orchestrator | 09:37:05.347 STDOUT terraform:  + tenant_id = (known after apply)
2025-10-09 09:37:05.349130 | orchestrator | 09:37:05.347 STDOUT terraform:  }
2025-10-09 09:37:05.349134 | orchestrator | 09:37:05.347 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-10-09 09:37:05.349139 | orchestrator | 09:37:05.347 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-10-09 09:37:05.349143 | orchestrator | 09:37:05.347 STDOUT terraform:  + all_tags = (known after apply)
2025-10-09 09:37:05.349147 | orchestrator | 09:37:05.347 STDOUT terraform:  + description = "management security group"
2025-10-09 09:37:05.349151 | orchestrator | 09:37:05.347 STDOUT terraform:  + id = (known after apply)
2025-10-09 09:37:05.349155 | orchestrator | 09:37:05.347 STDOUT terraform:  + name = "testbed-management"
2025-10-09 09:37:05.349159 | orchestrator | 09:37:05.347 STDOUT terraform:  + region = (known after apply)
2025-10-09 09:37:05.349163 | orchestrator | 09:37:05.347 STDOUT terraform:  + stateful = (known after apply)
2025-10-09 09:37:05.349167 | orchestrator | 09:37:05.347 STDOUT terraform:  + tenant_id = (known after apply)
2025-10-09 09:37:05.349171 | orchestrator | 09:37:05.347 STDOUT terraform:  }
2025-10-09 09:37:05.349175 | orchestrator | 09:37:05.347 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-10-09 09:37:05.349179 | orchestrator | 09:37:05.347 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-10-09 09:37:05.349184 | orchestrator | 09:37:05.347 STDOUT terraform:  + all_tags = (known after apply)
2025-10-09 09:37:05.349188 | orchestrator | 09:37:05.347 STDOUT terraform:  + description = "node security group"
2025-10-09 09:37:05.349192 | orchestrator | 09:37:05.347 STDOUT terraform:  + id = (known after apply)
2025-10-09 09:37:05.349196 | orchestrator | 09:37:05.347 STDOUT terraform:  + name = "testbed-node"
2025-10-09 09:37:05.349200 | orchestrator | 09:37:05.347 STDOUT terraform:  + region = (known after apply)
2025-10-09 09:37:05.349204 | orchestrator | 09:37:05.347 STDOUT terraform:  + stateful = (known after apply)
2025-10-09 09:37:05.349208 | orchestrator | 09:37:05.347 STDOUT terraform:  + tenant_id = (known after apply)
2025-10-09 09:37:05.349214 | orchestrator | 09:37:05.347 STDOUT terraform:  }
2025-10-09 09:37:05.349219 | orchestrator | 09:37:05.347 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-10-09 09:37:05.349223 | orchestrator | 09:37:05.347 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-10-09 09:37:05.349227 | orchestrator | 09:37:05.347 STDOUT terraform:  + all_tags = (known after apply)
2025-10-09 09:37:05.349231 | orchestrator | 09:37:05.347 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-10-09 09:37:05.349235 | orchestrator | 09:37:05.347 STDOUT terraform:  + dns_nameservers = [
2025-10-09 09:37:05.349243 | orchestrator | 09:37:05.347 STDOUT terraform:  + "8.8.8.8",
2025-10-09 09:37:05.349247 | orchestrator | 09:37:05.347 STDOUT terraform:  + "9.9.9.9",
2025-10-09 09:37:05.349251 | orchestrator | 09:37:05.347 STDOUT terraform:  ]
2025-10-09 09:37:05.349255 | orchestrator | 09:37:05.347 STDOUT terraform:  + enable_dhcp = true
2025-10-09 09:37:05.349259 | orchestrator | 09:37:05.347 STDOUT terraform:  + gateway_ip = (known after apply)
2025-10-09 09:37:05.349263 | orchestrator | 09:37:05.348 STDOUT terraform:  + id = (known after apply)
2025-10-09 09:37:05.349267 | orchestrator | 09:37:05.348 STDOUT terraform:  + ip_version = 4
2025-10-09 09:37:05.349271 | orchestrator | 09:37:05.348 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-10-09 09:37:05.349275 | orchestrator | 09:37:05.348 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-10-09 09:37:05.349279 | orchestrator | 09:37:05.348 STDOUT terraform:  + name = "subnet-testbed-management"
2025-10-09 09:37:05.349283 | orchestrator | 09:37:05.348 STDOUT terraform:  + network_id = (known after apply)
2025-10-09 09:37:05.349287 | orchestrator | 09:37:05.348 STDOUT terraform:  + no_gateway = false
2025-10-09 09:37:05.349291 | orchestrator | 09:37:05.348 STDOUT terraform:  + region = (known after apply)
2025-10-09 09:37:05.349314 | orchestrator | 09:37:05.348 STDOUT terraform:  + service_types = (known after apply)
2025-10-09 09:37:05.349318 | orchestrator | 09:37:05.348 STDOUT terraform:  + tenant_id = (known after apply)
2025-10-09 09:37:05.349323 | orchestrator | 09:37:05.348 STDOUT terraform:  + allocation_pool {
2025-10-09 09:37:05.349329 | orchestrator | 09:37:05.348 STDOUT terraform:  + end = "192.168.31.250"
2025-10-09 09:37:05.349333 | orchestrator | 09:37:05.348 STDOUT terraform:  + start = "192.168.31.200"
2025-10-09 09:37:05.349337 | orchestrator | 09:37:05.348 STDOUT terraform:  }
2025-10-09 09:37:05.349341 | orchestrator | 09:37:05.348 STDOUT terraform:  }
2025-10-09 09:37:05.349345 | orchestrator | 09:37:05.348 STDOUT terraform:  # terraform_data.image will be created
2025-10-09 09:37:05.349349 | orchestrator | 09:37:05.348 STDOUT terraform:  + resource "terraform_data" "image" {
2025-10-09 09:37:05.349354 | orchestrator | 09:37:05.348 STDOUT terraform:  + id = (known after apply)
2025-10-09 09:37:05.349358 | orchestrator | 09:37:05.348 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-10-09 09:37:05.349362 | orchestrator | 09:37:05.348 STDOUT terraform:  + output = (known after apply)
2025-10-09 09:37:05.349366 | orchestrator | 09:37:05.348 STDOUT terraform:  }
2025-10-09 09:37:05.349370 | orchestrator | 09:37:05.348 STDOUT terraform:  # terraform_data.image_node will be created
2025-10-09 09:37:05.349374 | orchestrator | 09:37:05.348 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-10-09 09:37:05.349378 | orchestrator | 09:37:05.348 STDOUT terraform:  + id = (known after apply)
2025-10-09 09:37:05.349382 | orchestrator | 09:37:05.348 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-10-09 09:37:05.349386 | orchestrator | 09:37:05.348 STDOUT terraform:  + output = (known after apply)
2025-10-09 09:37:05.349390 | orchestrator | 09:37:05.348 STDOUT terraform:  }
2025-10-09 09:37:05.349397 | orchestrator | 09:37:05.348 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-10-09 09:37:05.349402 | orchestrator | 09:37:05.348 STDOUT terraform: Changes to Outputs:
2025-10-09 09:37:05.349406 | orchestrator | 09:37:05.348 STDOUT terraform:  + manager_address = (sensitive value)
2025-10-09 09:37:05.349412 | orchestrator | 09:37:05.348 STDOUT terraform:  + private_key = (sensitive value)
2025-10-09 09:37:05.435419 | orchestrator | 09:37:05.435 STDOUT terraform: terraform_data.image_node: Creating...
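The plan above includes, among the 64 resources, an ingress rule for VRRP (IP protocol 112), the management security group, and the management subnet with its DHCP allocation pool. As a rough sketch, the HCL behind that plan presumably looks like the following; the attribute values are taken from the plan output, while the resource references and layout in the testbed repository are assumptions:

```hcl
# Sketch reconstructed from the plan output; resource wiring is assumed.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP, used for keepalived failover between nodes
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out addresses from the top of the /20,
  # leaving the rest of the range free for static assignment.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```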
2025-10-09 09:37:05.486244 | orchestrator | 09:37:05.486 STDOUT terraform: terraform_data.image: Creating...
2025-10-09 09:37:05.486304 | orchestrator | 09:37:05.486 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=1b62c8f2-5150-d402-e786-77fa2b9c5b24]
2025-10-09 09:37:05.486437 | orchestrator | 09:37:05.486 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=bcea4321-0153-358f-204b-bfe342f4bef2]
2025-10-09 09:37:05.496727 | orchestrator | 09:37:05.494 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-10-09 09:37:05.510619 | orchestrator | 09:37:05.510 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-10-09 09:37:05.517854 | orchestrator | 09:37:05.517 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-10-09 09:37:05.517899 | orchestrator | 09:37:05.517 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-10-09 09:37:05.518589 | orchestrator | 09:37:05.518 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-10-09 09:37:05.522558 | orchestrator | 09:37:05.522 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-10-09 09:37:05.527507 | orchestrator | 09:37:05.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-10-09 09:37:05.528813 | orchestrator | 09:37:05.528 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-10-09 09:37:05.529021 | orchestrator | 09:37:05.528 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-10-09 09:37:05.539801 | orchestrator | 09:37:05.539 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-10-09 09:37:05.967538 | orchestrator | 09:37:05.967 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-10-09 09:37:05.970990 | orchestrator | 09:37:05.970 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-10-09 09:37:05.996911 | orchestrator | 09:37:05.996 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-10-09 09:37:06.001636 | orchestrator | 09:37:06.001 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-10-09 09:37:06.484409 | orchestrator | 09:37:06.484 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 0s [id=33a1563b-4d56-4748-aba7-bcb1351b2eff]
2025-10-09 09:37:06.488613 | orchestrator | 09:37:06.488 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-10-09 09:37:06.535761 | orchestrator | 09:37:06.535 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-10-09 09:37:06.542838 | orchestrator | 09:37:06.542 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-10-09 09:37:09.130390 | orchestrator | 09:37:09.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=96a31b72-79c3-475c-a7fa-14d6a4c6c9b3]
2025-10-09 09:37:09.149999 | orchestrator | 09:37:09.149 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-10-09 09:37:09.155606 | orchestrator | 09:37:09.155 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=6ad7b454-0b43-4b47-a404-c2fa6c30a397]
2025-10-09 09:37:09.164028 | orchestrator | 09:37:09.163 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=919b2ed4-de3e-4423-bde9-ac7f73558c8d]
2025-10-09 09:37:09.170417 | orchestrator | 09:37:09.169 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-10-09 09:37:09.170473 | orchestrator | 09:37:09.169 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=243f3fa7a198894325fa9489d78b94442466af10]
2025-10-09 09:37:09.173313 | orchestrator | 09:37:09.173 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-10-09 09:37:09.184883 | orchestrator | 09:37:09.184 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=94b6a137-07a9-47a7-90bd-af13afc1319f]
2025-10-09 09:37:09.189034 | orchestrator | 09:37:09.188 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=46e0cf8b-6c4d-4615-bce2-a8b81f113425]
2025-10-09 09:37:09.190805 | orchestrator | 09:37:09.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-10-09 09:37:09.192997 | orchestrator | 09:37:09.192 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=8c8e0815c711d78a61cb7ca90e5fd94e6a42ec15]
2025-10-09 09:37:09.195088 | orchestrator | 09:37:09.194 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-10-09 09:37:09.195438 | orchestrator | 09:37:09.195 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=2df43997-ce38-41a3-953f-7189c0799c6e]
2025-10-09 09:37:09.197829 | orchestrator | 09:37:09.197 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-10-09 09:37:09.200363 | orchestrator | 09:37:09.200 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-10-09 09:37:09.205691 | orchestrator | 09:37:09.205 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-10-09 09:37:09.209622 | orchestrator | 09:37:09.209 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=fd778c69-d4e8-41af-bc93-131a1dca1168]
2025-10-09 09:37:09.213459 | orchestrator | 09:37:09.213 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-10-09 09:37:09.235353 | orchestrator | 09:37:09.235 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=ea7d1eca-dc5e-463e-aff8-492469dc7c84]
2025-10-09 09:37:09.263392 | orchestrator | 09:37:09.263 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d]
2025-10-09 09:37:09.892486 | orchestrator | 09:37:09.892 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=4cf24940-5021-4daf-9cb2-e8be662954e6]
2025-10-09 09:37:10.119242 | orchestrator | 09:37:10.118 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=6479e3e4-3ba4-43db-93b4-0a8bd2b784da]
2025-10-09 09:37:10.127531 | orchestrator | 09:37:10.127 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-10-09 09:37:12.539882 | orchestrator | 09:37:12.539 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=bb365fb9-195f-4b58-855c-59ae3371b843]
2025-10-09 09:37:12.580173 | orchestrator | 09:37:12.579 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=1155372d-89ce-41bb-8625-403a9b86a02b]
2025-10-09 09:37:12.607612 | orchestrator | 09:37:12.607 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=d64f8835-69ed-47b8-9bfe-3e1c6198249d]
2025-10-09 09:37:12.626759 | orchestrator | 09:37:12.626 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=696868ce-8dc7-4d26-89d7-c31c863807c5]
2025-10-09 09:37:12.633114 | orchestrator | 09:37:12.632 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=4e3856c6-3a0d-4403-a9bd-2ba24be42be0]
2025-10-09 09:37:12.642279 | orchestrator | 09:37:12.642 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=08a584d3-4193-4c26-9ad8-9a2035627c92]
2025-10-09 09:37:12.905399 | orchestrator | 09:37:12.905 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=4b42355f-a9ac-4011-8720-dff4c996a353]
2025-10-09 09:37:12.911867 | orchestrator | 09:37:12.911 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-10-09 09:37:12.914478 | orchestrator | 09:37:12.914 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-10-09 09:37:12.914554 | orchestrator | 09:37:12.914 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-10-09 09:37:13.127835 | orchestrator | 09:37:13.127 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=ee16d63c-9878-4c2c-8629-6be921d14ccf]
2025-10-09 09:37:13.143940 | orchestrator | 09:37:13.143 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-10-09 09:37:13.144478 | orchestrator | 09:37:13.144 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-10-09 09:37:13.144890 | orchestrator | 09:37:13.144 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-10-09 09:37:13.144907 | orchestrator | 09:37:13.144 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-10-09 09:37:13.145786 | orchestrator | 09:37:13.145 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-10-09 09:37:13.146363 | orchestrator | 09:37:13.146 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-10-09 09:37:13.173283 | orchestrator | 09:37:13.173 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=f1a095a1-009d-425a-8f30-75c3e36cb45f]
2025-10-09 09:37:13.180356 | orchestrator | 09:37:13.180 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-10-09 09:37:13.182639 | orchestrator | 09:37:13.182 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-10-09 09:37:13.187542 | orchestrator | 09:37:13.187 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-10-09 09:37:13.520973 | orchestrator | 09:37:13.520 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=5e87ac84-cac1-4fc1-85c3-2761e751b914]
2025-10-09 09:37:13.539896 | orchestrator | 09:37:13.539 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-10-09 09:37:13.909830 | orchestrator | 09:37:13.909 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=1bcd2ee3-bb98-4f67-bae5-e63a20d119d6]
2025-10-09 09:37:13.923101 | orchestrator | 09:37:13.922 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-10-09 09:37:13.933896 | orchestrator | 09:37:13.933 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=b2a29287-8407-442a-a1ba-82599ade77cb]
2025-10-09 09:37:13.946327 | orchestrator | 09:37:13.946 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-10-09 09:37:14.076130 | orchestrator | 09:37:14.075 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=5c49be7b-072d-4008-b4a6-f2c6656776f6]
2025-10-09 09:37:14.088775 | orchestrator | 09:37:14.088 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-10-09 09:37:14.253933 | orchestrator | 09:37:14.253 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=1ab2403f-e0c9-490b-8bac-6430f9a399fc]
2025-10-09 09:37:14.268497 | orchestrator | 09:37:14.268 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-10-09 09:37:14.337857 | orchestrator | 09:37:14.337 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=93b44c7a-8304-4243-9145-192ac1cf1e0c]
2025-10-09 09:37:14.343562 | orchestrator | 09:37:14.343 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-10-09 09:37:14.579149 | orchestrator | 09:37:14.578 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=5f5b1078-c593-4e2c-bc95-d893fe8c09b5]
2025-10-09 09:37:14.587191 | orchestrator | 09:37:14.586 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-10-09 09:37:14.614057 | orchestrator | 09:37:14.613 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=4c6549dd-0316-413c-8df6-216dd3dec3a7]
2025-10-09 09:37:14.636294 | orchestrator | 09:37:14.635 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=d3b6474c-b1cb-4f16-ad9b-cfe65a9b70ed]
2025-10-09 09:37:14.694725 | orchestrator | 09:37:14.694 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=715517c4-5b2f-431d-8e1c-097d73fd2b38]
2025-10-09 09:37:14.757526 | orchestrator | 09:37:14.757 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=9838232e-8338-481d-8451-6182f7d71d11]
2025-10-09 09:37:14.790098 | orchestrator | 09:37:14.789 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=92cff657-ae28-425c-8487-feee0ee2feec]
2025-10-09 09:37:14.814347 | orchestrator | 09:37:14.813 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=bec18f33-2d40-4454-a3f6-9574d05457b8]
2025-10-09 09:37:14.874077 | orchestrator | 09:37:14.873 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6ac08bcf-8bf9-4f65-8006-ce3f9d94e37b]
2025-10-09 09:37:14.938789 | orchestrator | 09:37:14.938 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=b60a2301-d3c4-4a2c-86b9-78ef15cb7ddb]
2025-10-09 09:37:15.027095 | orchestrator | 09:37:15.026 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=2b871e63-1c60-4c73-8190-7cd13c34600f]
2025-10-09 09:37:15.949443 | orchestrator | 09:37:15.949 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=31be1545-bd73-47ba-b934-b8711257f1e8]
2025-10-09 09:37:15.970784 | orchestrator | 09:37:15.970 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-10-09 09:37:15.982738 | orchestrator | 09:37:15.982 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-10-09 09:37:15.990236 | orchestrator | 09:37:15.990 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-10-09 09:37:15.990948 | orchestrator | 09:37:15.990 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-10-09 09:37:15.995027 | orchestrator | 09:37:15.994 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-10-09 09:37:15.999661 | orchestrator | 09:37:15.999 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-10-09 09:37:16.000834 | orchestrator | 09:37:16.000 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-10-09 09:37:19.004205 | orchestrator | 09:37:19.003 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=9eed4dfd-ecf8-498c-9c69-fbd75b5bc757]
2025-10-09 09:37:19.021506 | orchestrator | 09:37:19.021 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-10-09 09:37:19.023677 | orchestrator | 09:37:19.023 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-10-09 09:37:19.025525 | orchestrator | 09:37:19.025 STDOUT terraform: local_file.inventory: Creating...
2025-10-09 09:37:19.033093 | orchestrator | 09:37:19.032 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=9ebdf3fcd30a4679b0ef6048bbc6a728547e1664]
2025-10-09 09:37:19.035442 | orchestrator | 09:37:19.035 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=6114ed494fcfcbb01c2156ea315ab7df87fcbc75]
2025-10-09 09:37:20.028369 | orchestrator | 09:37:20.027 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=9eed4dfd-ecf8-498c-9c69-fbd75b5bc757]
2025-10-09 09:37:25.988599 | orchestrator | 09:37:25.988 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-10-09 09:37:25.988714 | orchestrator | 09:37:25.988 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-10-09 09:37:25.996673 | orchestrator | 09:37:25.996 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-10-09 09:37:25.998859 | orchestrator | 09:37:25.998 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-10-09 09:37:26.001091 | orchestrator | 09:37:26.000 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-10-09 09:37:26.001246 | orchestrator | 09:37:26.001 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-10-09 09:37:35.991181 | orchestrator | 09:37:35.990 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-10-09 09:37:35.991377 | orchestrator | 09:37:35.991 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-10-09 09:37:35.997213 | orchestrator | 09:37:35.997 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-10-09 09:37:35.999507 | orchestrator | 09:37:35.999 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-10-09 09:37:36.001713 | orchestrator | 09:37:36.001 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-10-09 09:37:36.001894 | orchestrator | 09:37:36.001 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-10-09 09:37:36.649604 | orchestrator | 09:37:36.649 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=ad1d70fd-ed70-461e-b696-ce1b0a580662]
2025-10-09 09:37:36.849799 | orchestrator | 09:37:36.849 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=13582a23-ac32-4020-a183-8ec97a7e6ba7]
2025-10-09 09:37:45.993191 | orchestrator | 09:37:45.992 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-10-09 09:37:45.998292 | orchestrator | 09:37:45.998 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-10-09 09:37:46.000491 | orchestrator | 09:37:46.000 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-10-09 09:37:46.002827 | orchestrator | 09:37:46.002 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-10-09 09:37:47.157473 | orchestrator | 09:37:47.156 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=6d67d9dc-fd04-4e90-a847-9d44b395930a]
2025-10-09 09:37:47.261342 | orchestrator | 09:37:47.260 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=e0e3fd97-b8a0-4e47-9447-557066818b80]
2025-10-09 09:37:47.298264 | orchestrator | 09:37:47.297 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=438f6d41-9abe-4440-a139-2bd7ab8ba347]
2025-10-09 09:37:47.396921 | orchestrator | 09:37:47.396 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=8471f3f4-47a2-4879-9658-1f62d1df4e98]
2025-10-09 09:37:47.429380 | orchestrator | 09:37:47.429 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-10-09 09:37:47.431962 | orchestrator | 09:37:47.431 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-10-09 09:37:47.432035 | orchestrator | 09:37:47.431 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-10-09 09:37:47.432243 | orchestrator | 09:37:47.432 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-10-09 09:37:47.434948 | orchestrator | 09:37:47.434 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2636364175639625951]
2025-10-09 09:37:47.441273 | orchestrator | 09:37:47.441 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-10-09 09:37:47.443367 | orchestrator | 09:37:47.443 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-10-09 09:37:47.447261 | orchestrator | 09:37:47.445 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-10-09 09:37:47.447292 | orchestrator | 09:37:47.446 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-10-09 09:37:47.447606 | orchestrator | 09:37:47.447 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-10-09 09:37:47.463777 | orchestrator | 09:37:47.463 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-10-09 09:37:47.468237 | orchestrator | 09:37:47.468 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-10-09 09:37:51.127885 | orchestrator | 09:37:51.127 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=ad1d70fd-ed70-461e-b696-ce1b0a580662/96a31b72-79c3-475c-a7fa-14d6a4c6c9b3]
2025-10-09 09:37:51.216526 | orchestrator | 09:37:51.216 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=13582a23-ac32-4020-a183-8ec97a7e6ba7/2df43997-ce38-41a3-953f-7189c0799c6e]
2025-10-09 09:37:51.223540 | orchestrator | 09:37:51.223 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=ad1d70fd-ed70-461e-b696-ce1b0a580662/fd778c69-d4e8-41af-bc93-131a1dca1168]
2025-10-09 09:37:51.241486 | orchestrator | 09:37:51.241 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=8471f3f4-47a2-4879-9658-1f62d1df4e98/94b6a137-07a9-47a7-90bd-af13afc1319f]
2025-10-09 09:37:51.252691 | orchestrator | 09:37:51.252 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=13582a23-ac32-4020-a183-8ec97a7e6ba7/ea7d1eca-dc5e-463e-aff8-492469dc7c84]
2025-10-09 09:37:51.289663 | orchestrator | 09:37:51.289 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=8471f3f4-47a2-4879-9658-1f62d1df4e98/46e0cf8b-6c4d-4615-bce2-a8b81f113425]
2025-10-09 09:37:57.356953 | orchestrator | 09:37:57.356 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=ad1d70fd-ed70-461e-b696-ce1b0a580662/9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d]
2025-10-09 09:37:57.373163 | orchestrator | 09:37:57.372 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=13582a23-ac32-4020-a183-8ec97a7e6ba7/919b2ed4-de3e-4423-bde9-ac7f73558c8d]
2025-10-09 09:37:57.383726 | orchestrator | 09:37:57.383 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=8471f3f4-47a2-4879-9658-1f62d1df4e98/6ad7b454-0b43-4b47-a404-c2fa6c30a397]
2025-10-09 09:37:57.469215 | orchestrator | 09:37:57.468 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-10-09 09:38:07.469880 | orchestrator | 09:38:07.469 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-10-09 09:38:07.831826 | orchestrator | 09:38:07.831 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=1e12460a-e4ed-4659-93d9-88e4bcee0fc5]
2025-10-09 09:38:07.848124 | orchestrator | 09:38:07.847 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
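The apply finishes with two sensitive outputs, `manager_address` and `private_key`, whose values Terraform masks in the console. A minimal sketch of how such outputs are presumably declared (the output names match the log; the value expressions are assumptions based on the resources created above):

```hcl
# Marking an output as sensitive suppresses its value in plan/apply output;
# it can still be read explicitly, e.g. with `terraform output -raw private_key`.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  # Assumed source: a generated SSH key also written to local_sensitive_file.id_rsa.
  value     = tls_private_key.key.private_key_pem
  sensitive = true
}
```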
2025-10-09 09:38:07.848193 | orchestrator | 09:38:07.847 STDOUT terraform: Outputs: 2025-10-09 09:38:07.848229 | orchestrator | 09:38:07.847 STDOUT terraform: manager_address = 2025-10-09 09:38:07.848243 | orchestrator | 09:38:07.847 STDOUT terraform: private_key = 2025-10-09 09:38:08.199735 | orchestrator | ok: Runtime: 0:01:08.315877 2025-10-09 09:38:08.236287 | 2025-10-09 09:38:08.236403 | TASK [Create infrastructure (stable)] 2025-10-09 09:38:08.767850 | orchestrator | skipping: Conditional result was False 2025-10-09 09:38:08.785707 | 2025-10-09 09:38:08.785862 | TASK [Fetch manager address] 2025-10-09 09:38:09.199995 | orchestrator | ok 2025-10-09 09:38:09.210005 | 2025-10-09 09:38:09.210196 | TASK [Set manager_host address] 2025-10-09 09:38:09.291390 | orchestrator | ok 2025-10-09 09:38:09.301545 | 2025-10-09 09:38:09.301689 | LOOP [Update ansible collections] 2025-10-09 09:38:10.161115 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-10-09 09:38:10.161483 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-10-09 09:38:10.161543 | orchestrator | Starting galaxy collection install process 2025-10-09 09:38:10.161642 | orchestrator | Process install dependency map 2025-10-09 09:38:10.161679 | orchestrator | Starting collection install process 2025-10-09 09:38:10.161712 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-10-09 09:38:10.161750 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-10-09 09:38:10.161791 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-10-09 09:38:10.161864 | orchestrator | ok: Item: commons Runtime: 0:00:00.527519 2025-10-09 09:38:11.016120 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 
2025-10-09 09:38:11.016284 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-10-09 09:38:11.016334 | orchestrator | Starting galaxy collection install process 2025-10-09 09:38:11.016371 | orchestrator | Process install dependency map 2025-10-09 09:38:11.016405 | orchestrator | Starting collection install process 2025-10-09 09:38:11.016435 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-10-09 09:38:11.016466 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-10-09 09:38:11.016495 | orchestrator | osism.services:999.0.0 was installed successfully 2025-10-09 09:38:11.016547 | orchestrator | ok: Item: services Runtime: 0:00:00.601421 2025-10-09 09:38:11.033819 | 2025-10-09 09:38:11.033949 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-10-09 09:38:21.572756 | orchestrator | ok 2025-10-09 09:38:21.581760 | 2025-10-09 09:38:21.581865 | TASK [Wait a little longer for the manager so that everything is ready] 2025-10-09 09:39:21.626336 | orchestrator | ok 2025-10-09 09:39:21.636182 | 2025-10-09 09:39:21.636310 | TASK [Fetch manager ssh hostkey] 2025-10-09 09:39:23.216999 | orchestrator | Output suppressed because no_log was given 2025-10-09 09:39:23.234985 | 2025-10-09 09:39:23.235220 | TASK [Get ssh keypair from terraform environment] 2025-10-09 09:39:23.772150 | orchestrator | ok: Runtime: 0:00:00.011330 2025-10-09 09:39:23.788334 | 2025-10-09 09:39:23.788485 | TASK [Point out that the following task takes some time and does not give any output] 2025-10-09 09:39:23.836751 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
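Editor's note: the task above waits up to 300 seconds for port 22 to open and for the banner to contain "OpenSSH" (in the playbook this is done with Ansible's wait_for module and search_regex). A minimal shell sketch of the same check follows; the probe_ssh_banner function is a stub that returns a canned banner so the sketch runs without a real host — a live version would read the banner with something like `nc -w1 "$host" 22`.

```shell
# Stub probe: a real probe would fetch the SSH banner from host:22.
probe_ssh_banner() {
    printf 'SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13\n'
}

# Poll until the banner contains "OpenSSH", or give up after $timeout seconds.
wait_for_openssh() {
    local host=$1 timeout=${2:-300} waited=0
    until probe_ssh_banner "$host" | grep -q OpenSSH; do
        sleep 2
        waited=$((waited + 2))
        [ "$waited" -ge "$timeout" ] && return 1
    done
    echo "OpenSSH reachable on ${host}:22"
}

wait_for_openssh testbed-manager 300
```

The subsequent one-minute pause ("Wait a little longer …") gives cloud-init and sshd time to settle after the port first answers.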
2025-10-09 09:39:23.846740 | 2025-10-09 09:39:23.846897 | TASK [Run manager part 0] 2025-10-09 09:39:24.690068 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-10-09 09:39:24.734394 | orchestrator | 2025-10-09 09:39:24.734448 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-10-09 09:39:24.734457 | orchestrator | 2025-10-09 09:39:24.734473 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-10-09 09:39:26.631542 | orchestrator | ok: [testbed-manager] 2025-10-09 09:39:26.631598 | orchestrator | 2025-10-09 09:39:26.631622 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-10-09 09:39:26.631632 | orchestrator | 2025-10-09 09:39:26.631641 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:39:28.593797 | orchestrator | ok: [testbed-manager] 2025-10-09 09:39:28.593853 | orchestrator | 2025-10-09 09:39:28.593861 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-10-09 09:39:29.246687 | orchestrator | ok: [testbed-manager] 2025-10-09 09:39:29.246743 | orchestrator | 2025-10-09 09:39:29.246751 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-10-09 09:39:29.297968 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:29.297995 | orchestrator | 2025-10-09 09:39:29.298002 | orchestrator | TASK [Update package cache] **************************************************** 2025-10-09 09:39:29.328630 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:29.328669 | orchestrator | 2025-10-09 09:39:29.328675 | orchestrator | TASK [Install required packages] *********************************************** 2025-10-09 09:39:29.356255 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:29.356321 | 
orchestrator | 2025-10-09 09:39:29.356335 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-10-09 09:39:29.381345 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:29.381379 | orchestrator | 2025-10-09 09:39:29.381385 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-10-09 09:39:29.405648 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:29.405676 | orchestrator | 2025-10-09 09:39:29.405682 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2025-10-09 09:39:29.430196 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:29.430214 | orchestrator | 2025-10-09 09:39:29.430221 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-10-09 09:39:29.454163 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:39:29.454182 | orchestrator | 2025-10-09 09:39:29.454187 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-10-09 09:39:30.397599 | orchestrator | changed: [testbed-manager] 2025-10-09 09:39:30.397652 | orchestrator | 2025-10-09 09:39:30.397658 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-10-09 09:42:11.571367 | orchestrator | changed: [testbed-manager] 2025-10-09 09:42:11.571432 | orchestrator | 2025-10-09 09:42:11.571449 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-10-09 09:43:31.193475 | orchestrator | changed: [testbed-manager] 2025-10-09 09:43:31.193603 | orchestrator | 2025-10-09 09:43:31.193621 | orchestrator | TASK [Install required packages] *********************************************** 2025-10-09 09:43:53.382889 | orchestrator | changed: [testbed-manager] 2025-10-09 09:43:53.382977 | orchestrator | 2025-10-09 09:43:53.382996 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-10-09 09:44:02.849604 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:02.849697 | orchestrator | 2025-10-09 09:44:02.849715 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-10-09 09:44:02.899407 | orchestrator | ok: [testbed-manager] 2025-10-09 09:44:02.899446 | orchestrator | 2025-10-09 09:44:02.899455 | orchestrator | TASK [Get current user] ******************************************************** 2025-10-09 09:44:03.791653 | orchestrator | ok: [testbed-manager] 2025-10-09 09:44:03.791742 | orchestrator | 2025-10-09 09:44:03.791761 | orchestrator | TASK [Create venv directory] *************************************************** 2025-10-09 09:44:04.563602 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:04.563642 | orchestrator | 2025-10-09 09:44:04.563650 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-10-09 09:44:11.264501 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:11.264565 | orchestrator | 2025-10-09 09:44:11.264612 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-10-09 09:44:17.814836 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:17.814927 | orchestrator | 2025-10-09 09:44:17.814946 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-10-09 09:44:20.730122 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:20.730430 | orchestrator | 2025-10-09 09:44:20.730451 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-10-09 09:44:22.706101 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:22.706185 | orchestrator | 2025-10-09 09:44:22.706202 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-10-09 
09:44:23.892854 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-10-09 09:44:23.892932 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-10-09 09:44:23.892947 | orchestrator | 2025-10-09 09:44:23.892960 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-10-09 09:44:23.933949 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-10-09 09:44:23.934000 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-10-09 09:44:23.934014 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-10-09 09:44:23.934107 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-10-09 09:44:27.572308 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-10-09 09:44:27.572346 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-10-09 09:44:27.572352 | orchestrator | 2025-10-09 09:44:27.572358 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-10-09 09:44:28.167203 | orchestrator | changed: [testbed-manager] 2025-10-09 09:44:28.167250 | orchestrator | 2025-10-09 09:44:28.167260 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-10-09 09:46:58.469927 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-10-09 09:46:58.469979 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-10-09 09:46:58.469988 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-10-09 09:46:58.469996 | orchestrator | 2025-10-09 09:46:58.470003 | orchestrator | TASK [Install local collections] *********************************************** 2025-10-09 09:47:00.877821 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-10-09 09:47:00.877856 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-10-09 09:47:00.877862 | orchestrator | 2025-10-09 09:47:00.877867 | orchestrator | PLAY [Create operator user] **************************************************** 2025-10-09 09:47:00.877872 | orchestrator | 2025-10-09 09:47:00.877876 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:47:02.295010 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:02.295102 | orchestrator | 2025-10-09 09:47:02.295111 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-10-09 09:47:02.343284 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:02.343325 | orchestrator | 2025-10-09 09:47:02.343334 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-10-09 09:47:02.404121 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:02.404194 | orchestrator | 2025-10-09 09:47:02.404209 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-10-09 09:47:03.188415 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:03.188607 | orchestrator | 2025-10-09 09:47:03.188626 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-10-09 09:47:03.987063 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:03.987155 | orchestrator | 2025-10-09 09:47:03.987172 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-10-09 09:47:05.392001 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-10-09 09:47:05.392102 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-10-09 09:47:05.392117 | orchestrator | 2025-10-09 09:47:05.392143 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-10-09 09:47:06.754621 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:06.754727 | orchestrator | 2025-10-09 09:47:06.754744 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-10-09 09:47:08.540913 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 09:47:08.541695 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-10-09 09:47:08.541720 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-10-09 09:47:08.541733 | orchestrator | 2025-10-09 09:47:08.541746 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-10-09 09:47:08.596796 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:08.596873 | orchestrator | 2025-10-09 09:47:08.596889 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-10-09 09:47:09.200098 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:09.200205 | orchestrator | 2025-10-09 09:47:09.200225 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-10-09 09:47:09.268322 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:09.268392 | orchestrator | 2025-10-09 09:47:09.268408 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-10-09 09:47:10.121965 | orchestrator | changed: [testbed-manager] => (item=None) 2025-10-09 09:47:10.122095 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:10.122116 | orchestrator | 2025-10-09 09:47:10.122129 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-10-09 09:47:10.160420 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:10.160478 | orchestrator | 2025-10-09 09:47:10.160494 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-10-09 09:47:10.195671 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:10.195747 | orchestrator | 2025-10-09 09:47:10.195763 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-10-09 09:47:10.233445 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:10.233510 | orchestrator | 2025-10-09 09:47:10.233524 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-10-09 09:47:10.304531 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:10.304611 | orchestrator | 2025-10-09 09:47:10.304628 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-10-09 09:47:11.032837 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:11.032918 | orchestrator | 2025-10-09 09:47:11.032936 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-10-09 09:47:11.032951 | orchestrator | 2025-10-09 09:47:11.032965 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:47:12.523876 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:12.523943 | orchestrator | 2025-10-09 09:47:12.523957 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-10-09 09:47:13.504911 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:13.504998 | orchestrator | 2025-10-09 09:47:13.505015 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:47:13.505029 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-10-09 09:47:13.505072 | orchestrator | 2025-10-09 09:47:13.662624 | orchestrator | ok: Runtime: 0:07:49.461905 2025-10-09 09:47:13.674044 | 2025-10-09 09:47:13.674164 | TASK [Point 
out that the log in on the manager is now possible] 2025-10-09 09:47:13.722061 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-10-09 09:47:13.731746 | 2025-10-09 09:47:13.731854 | TASK [Point out that the following task takes some time and does not give any output] 2025-10-09 09:47:13.769921 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-10-09 09:47:13.780181 | 2025-10-09 09:47:13.780299 | TASK [Run manager part 1 + 2] 2025-10-09 09:47:14.694972 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-10-09 09:47:14.750181 | orchestrator | 2025-10-09 09:47:14.750257 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-10-09 09:47:14.750276 | orchestrator | 2025-10-09 09:47:14.750306 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 09:47:17.796121 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:17.796268 | orchestrator | 2025-10-09 09:47:17.796327 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-10-09 09:47:17.835517 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:17.835568 | orchestrator | 2025-10-09 09:47:17.835581 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-10-09 09:47:17.876729 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:17.876782 | orchestrator | 2025-10-09 09:47:17.876798 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-10-09 09:47:17.925920 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:17.925982 | orchestrator | 2025-10-09 09:47:17.925992 | orchestrator | TASK [osism.commons.repository : Set repository_default fact
to default value] *** 2025-10-09 09:47:17.985965 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:17.986178 | orchestrator | 2025-10-09 09:47:17.986204 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-10-09 09:47:18.042125 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:18.042195 | orchestrator | 2025-10-09 09:47:18.042212 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-10-09 09:47:18.083993 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-10-09 09:47:18.084083 | orchestrator | 2025-10-09 09:47:18.084104 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-10-09 09:47:18.809981 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:18.810389 | orchestrator | 2025-10-09 09:47:18.810419 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-10-09 09:47:18.854736 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:18.854799 | orchestrator | 2025-10-09 09:47:18.854814 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-10-09 09:47:20.213827 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:20.213911 | orchestrator | 2025-10-09 09:47:20.213931 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-10-09 09:47:20.802493 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:20.802566 | orchestrator | 2025-10-09 09:47:20.802583 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-10-09 09:47:22.022701 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:22.022771 | orchestrator | 2025-10-09 09:47:22.022788 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-10-09 09:47:39.585114 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:39.585207 | orchestrator | 2025-10-09 09:47:39.585232 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-10-09 09:47:40.272711 | orchestrator | ok: [testbed-manager] 2025-10-09 09:47:40.272788 | orchestrator | 2025-10-09 09:47:40.272807 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-10-09 09:47:40.329484 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:40.329551 | orchestrator | 2025-10-09 09:47:40.329566 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-10-09 09:47:41.323335 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:41.323417 | orchestrator | 2025-10-09 09:47:41.323433 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-10-09 09:47:42.315385 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:42.315470 | orchestrator | 2025-10-09 09:47:42.315487 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-10-09 09:47:42.903486 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:42.904298 | orchestrator | 2025-10-09 09:47:42.904319 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-10-09 09:47:42.944679 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-10-09 09:47:42.944740 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-10-09 09:47:42.944746 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-10-09 09:47:42.944752 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-10-09 09:47:46.254592 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:46.254692 | orchestrator | 2025-10-09 09:47:46.254710 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-10-09 09:47:56.297455 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-10-09 09:47:56.297571 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-10-09 09:47:56.297591 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-10-09 09:47:56.297604 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-10-09 09:47:56.297623 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-10-09 09:47:56.297635 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-10-09 09:47:56.297647 | orchestrator | 2025-10-09 09:47:56.297660 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-10-09 09:47:57.356348 | orchestrator | changed: [testbed-manager] 2025-10-09 09:47:57.356429 | orchestrator | 2025-10-09 09:47:57.356445 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-10-09 09:47:57.406262 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:47:57.406304 | orchestrator | 2025-10-09 09:47:57.406313 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-10-09 09:48:00.612013 | orchestrator | changed: [testbed-manager] 2025-10-09 09:48:00.612136 | orchestrator | 2025-10-09 09:48:00.612154 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-10-09 09:48:00.657932 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:48:00.657994 | orchestrator | 2025-10-09 09:48:00.658009 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-10-09 09:49:45.027542 | orchestrator | changed: [testbed-manager] 2025-10-09 
09:49:45.027631 | orchestrator | 2025-10-09 09:49:45.027649 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-10-09 09:49:46.260786 | orchestrator | ok: [testbed-manager] 2025-10-09 09:49:46.260877 | orchestrator | 2025-10-09 09:49:46.260895 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:49:46.260910 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-10-09 09:49:46.260921 | orchestrator | 2025-10-09 09:49:46.419257 | orchestrator | ok: Runtime: 0:02:32.263878 2025-10-09 09:49:46.431798 | 2025-10-09 09:49:46.431913 | TASK [Reboot manager] 2025-10-09 09:49:47.965463 | orchestrator | ok: Runtime: 0:00:00.990334 2025-10-09 09:49:47.980895 | 2025-10-09 09:49:47.981051 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-10-09 09:50:04.381218 | orchestrator | ok 2025-10-09 09:50:04.391425 | 2025-10-09 09:50:04.391540 | TASK [Wait a little longer for the manager so that everything is ready] 2025-10-09 09:51:04.436744 | orchestrator | ok 2025-10-09 09:51:04.445437 | 2025-10-09 09:51:04.445540 | TASK [Deploy manager + bootstrap nodes] 2025-10-09 09:51:07.183273 | orchestrator | 2025-10-09 09:51:07.183465 | orchestrator | # DEPLOY MANAGER 2025-10-09 09:51:07.183489 | orchestrator | 2025-10-09 09:51:07.183505 | orchestrator | + set -e 2025-10-09 09:51:07.183519 | orchestrator | + echo 2025-10-09 09:51:07.183534 | orchestrator | + echo '# DEPLOY MANAGER' 2025-10-09 09:51:07.183551 | orchestrator | + echo 2025-10-09 09:51:07.183602 | orchestrator | + cat /opt/manager-vars.sh 2025-10-09 09:51:07.187506 | orchestrator | export NUMBER_OF_NODES=6 2025-10-09 09:51:07.187530 | orchestrator | 2025-10-09 09:51:07.187543 | orchestrator | export CEPH_VERSION=reef 2025-10-09 09:51:07.187557 | orchestrator | export CONFIGURATION_VERSION=main 2025-10-09 09:51:07.187569 | orchestrator 
| export MANAGER_VERSION=latest 2025-10-09 09:51:07.187591 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-10-09 09:51:07.187603 | orchestrator | 2025-10-09 09:51:07.187621 | orchestrator | export ARA=false 2025-10-09 09:51:07.187633 | orchestrator | export DEPLOY_MODE=manager 2025-10-09 09:51:07.187650 | orchestrator | export TEMPEST=false 2025-10-09 09:51:07.187662 | orchestrator | export IS_ZUUL=true 2025-10-09 09:51:07.187672 | orchestrator | 2025-10-09 09:51:07.187691 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25 2025-10-09 09:51:07.187703 | orchestrator | export EXTERNAL_API=false 2025-10-09 09:51:07.187714 | orchestrator | 2025-10-09 09:51:07.187725 | orchestrator | export IMAGE_USER=ubuntu 2025-10-09 09:51:07.187740 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-10-09 09:51:07.187751 | orchestrator | 2025-10-09 09:51:07.187762 | orchestrator | export CEPH_STACK=ceph-ansible 2025-10-09 09:51:07.187777 | orchestrator | 2025-10-09 09:51:07.187788 | orchestrator | + echo 2025-10-09 09:51:07.187801 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 09:51:07.189045 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 09:51:07.189063 | orchestrator | ++ INTERACTIVE=false 2025-10-09 09:51:07.189077 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 09:51:07.189089 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 09:51:07.189185 | orchestrator | + source /opt/manager-vars.sh 2025-10-09 09:51:07.189201 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-10-09 09:51:07.189212 | orchestrator | ++ NUMBER_OF_NODES=6 2025-10-09 09:51:07.189227 | orchestrator | ++ export CEPH_VERSION=reef 2025-10-09 09:51:07.189238 | orchestrator | ++ CEPH_VERSION=reef 2025-10-09 09:51:07.189249 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-10-09 09:51:07.189260 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-10-09 09:51:07.189271 | orchestrator | ++ export MANAGER_VERSION=latest 2025-10-09 09:51:07.189282 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-10-09 09:51:07.189293 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-10-09 09:51:07.189312 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-10-09 09:51:07.189327 | orchestrator | ++ export ARA=false 2025-10-09 09:51:07.189338 | orchestrator | ++ ARA=false 2025-10-09 09:51:07.189349 | orchestrator | ++ export DEPLOY_MODE=manager 2025-10-09 09:51:07.189360 | orchestrator | ++ DEPLOY_MODE=manager 2025-10-09 09:51:07.189371 | orchestrator | ++ export TEMPEST=false 2025-10-09 09:51:07.189382 | orchestrator | ++ TEMPEST=false 2025-10-09 09:51:07.189393 | orchestrator | ++ export IS_ZUUL=true 2025-10-09 09:51:07.189403 | orchestrator | ++ IS_ZUUL=true 2025-10-09 09:51:07.189418 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25 2025-10-09 09:51:07.189429 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25 2025-10-09 09:51:07.189440 | orchestrator | ++ export EXTERNAL_API=false 2025-10-09 09:51:07.189451 | orchestrator | ++ EXTERNAL_API=false 2025-10-09 09:51:07.189461 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-10-09 09:51:07.189472 | orchestrator | ++ IMAGE_USER=ubuntu 2025-10-09 09:51:07.189571 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-10-09 09:51:07.189586 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-10-09 09:51:07.189597 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-10-09 09:51:07.189608 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-10-09 09:51:07.189620 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-10-09 09:51:07.258252 | orchestrator | + docker version 2025-10-09 09:51:07.527445 | orchestrator | Client: Docker Engine - Community 2025-10-09 09:51:07.527489 | orchestrator | Version: 27.5.1 2025-10-09 09:51:07.527502 | orchestrator | API version: 1.47 2025-10-09 09:51:07.527513 | orchestrator | Go version: go1.22.11 2025-10-09 09:51:07.527524 | orchestrator | Git commit: 9f9e405 2025-10-09 09:51:07.527535 
| orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-10-09 09:51:07.527546 | orchestrator | OS/Arch: linux/amd64
2025-10-09 09:51:07.527557 | orchestrator | Context: default
2025-10-09 09:51:07.527568 | orchestrator |
2025-10-09 09:51:07.527579 | orchestrator | Server: Docker Engine - Community
2025-10-09 09:51:07.527590 | orchestrator | Engine:
2025-10-09 09:51:07.527601 | orchestrator | Version: 27.5.1
2025-10-09 09:51:07.527611 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-10-09 09:51:07.527649 | orchestrator | Go version: go1.22.11
2025-10-09 09:51:07.527660 | orchestrator | Git commit: 4c9b3b0
2025-10-09 09:51:07.527671 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-10-09 09:51:07.527681 | orchestrator | OS/Arch: linux/amd64
2025-10-09 09:51:07.527692 | orchestrator | Experimental: false
2025-10-09 09:51:07.527703 | orchestrator | containerd:
2025-10-09 09:51:07.527714 | orchestrator | Version: v1.7.28
2025-10-09 09:51:07.527725 | orchestrator | GitCommit: b98a3aace656320842a23f4a392a33f46af97866
2025-10-09 09:51:07.527736 | orchestrator | runc:
2025-10-09 09:51:07.527747 | orchestrator | Version: 1.3.0
2025-10-09 09:51:07.527758 | orchestrator | GitCommit: v1.3.0-0-g4ca628d1
2025-10-09 09:51:07.527769 | orchestrator | docker-init:
2025-10-09 09:51:07.527779 | orchestrator | Version: 0.19.0
2025-10-09 09:51:07.527791 | orchestrator | GitCommit: de40ad0
2025-10-09 09:51:07.529824 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-10-09 09:51:07.538525 | orchestrator | + set -e
2025-10-09 09:51:07.538553 | orchestrator | + source /opt/manager-vars.sh
2025-10-09 09:51:07.538564 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-10-09 09:51:07.538575 | orchestrator | ++ NUMBER_OF_NODES=6
2025-10-09 09:51:07.538586 | orchestrator | ++ export CEPH_VERSION=reef
2025-10-09 09:51:07.538597 | orchestrator | ++ CEPH_VERSION=reef
2025-10-09 09:51:07.538608 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-10-09 09:51:07.538619 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-10-09 09:51:07.538629 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-09 09:51:07.538640 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-09 09:51:07.538651 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-10-09 09:51:07.538661 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-10-09 09:51:07.538673 | orchestrator | ++ export ARA=false
2025-10-09 09:51:07.538683 | orchestrator | ++ ARA=false
2025-10-09 09:51:07.538694 | orchestrator | ++ export DEPLOY_MODE=manager
2025-10-09 09:51:07.538705 | orchestrator | ++ DEPLOY_MODE=manager
2025-10-09 09:51:07.538715 | orchestrator | ++ export TEMPEST=false
2025-10-09 09:51:07.538726 | orchestrator | ++ TEMPEST=false
2025-10-09 09:51:07.538736 | orchestrator | ++ export IS_ZUUL=true
2025-10-09 09:51:07.538747 | orchestrator | ++ IS_ZUUL=true
2025-10-09 09:51:07.538758 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25
2025-10-09 09:51:07.538769 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25
2025-10-09 09:51:07.538780 | orchestrator | ++ export EXTERNAL_API=false
2025-10-09 09:51:07.538791 | orchestrator | ++ EXTERNAL_API=false
2025-10-09 09:51:07.538801 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-10-09 09:51:07.538812 | orchestrator | ++ IMAGE_USER=ubuntu
2025-10-09 09:51:07.538823 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-10-09 09:51:07.538834 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-10-09 09:51:07.538849 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-10-09 09:51:07.538860 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-10-09 09:51:07.538871 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-10-09 09:51:07.538882 | orchestrator | ++ export INTERACTIVE=false
2025-10-09 09:51:07.538893 | orchestrator | ++ INTERACTIVE=false
2025-10-09 09:51:07.538903 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-10-09 09:51:07.538915 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-10-09 09:51:07.539000 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-10-09 09:51:07.539013 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-10-09 09:51:07.539025 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-10-09 09:51:07.543134 | orchestrator | + set -e
2025-10-09 09:51:07.543153 | orchestrator | + VERSION=reef
2025-10-09 09:51:07.543888 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-10-09 09:51:07.551036 | orchestrator | + [[ -n ceph_version: reef ]]
2025-10-09 09:51:07.551058 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-10-09 09:51:07.554344 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-10-09 09:51:07.558193 | orchestrator | + set -e
2025-10-09 09:51:07.558212 | orchestrator | + VERSION=2024.2
2025-10-09 09:51:07.558767 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-10-09 09:51:07.563057 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-10-09 09:51:07.563077 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-10-09 09:51:07.569445 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-10-09 09:51:07.570622 | orchestrator | ++ semver latest 7.0.0
2025-10-09 09:51:07.623726 | orchestrator | + [[ -1 -ge 0 ]]
2025-10-09 09:51:07.623755 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-10-09 09:51:07.623767 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-10-09 09:51:07.623778 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-10-09 09:51:07.732172 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-10-09 09:51:07.734067 | orchestrator | + source /opt/venv/bin/activate
2025-10-09 09:51:07.735386 | orchestrator | ++ deactivate nondestructive
2025-10-09 09:51:07.735403 | orchestrator | ++ '[' -n '' ']'
2025-10-09 09:51:07.735416 | orchestrator | ++ '[' -n '' ']'
2025-10-09 09:51:07.735428 | orchestrator | ++ hash -r
2025-10-09 09:51:07.735439 | orchestrator | ++ '[' -n '' ']'
2025-10-09 09:51:07.735450 | orchestrator | ++ unset VIRTUAL_ENV
2025-10-09 09:51:07.735460 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-10-09 09:51:07.735471 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-10-09 09:51:07.735482 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-10-09 09:51:07.735494 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-10-09 09:51:07.735505 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-10-09 09:51:07.735516 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-10-09 09:51:07.735527 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-10-09 09:51:07.735538 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-10-09 09:51:07.735550 | orchestrator | ++ export PATH
2025-10-09 09:51:07.735565 | orchestrator | ++ '[' -n '' ']'
2025-10-09 09:51:07.735576 | orchestrator | ++ '[' -z '' ']'
2025-10-09 09:51:07.735587 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-10-09 09:51:07.735598 | orchestrator | ++ PS1='(venv) '
2025-10-09 09:51:07.735609 | orchestrator | ++ export PS1
2025-10-09 09:51:07.735619 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-10-09 09:51:07.735630 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-10-09 09:51:07.735641 | orchestrator | ++ hash -r
2025-10-09 09:51:07.735880 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-10-09 09:51:09.253607 | orchestrator |
2025-10-09 09:51:09.253718 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-10-09 09:51:09.253735 | orchestrator |
2025-10-09 09:51:09.253747 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-10-09 09:51:09.965713 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:09.965808 | orchestrator |
2025-10-09 09:51:09.965824 | orchestrator | TASK [Copy fact files] *********************************************************
2025-10-09 09:51:11.028808 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:11.028912 | orchestrator |
2025-10-09 09:51:11.028951 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-10-09 09:51:11.028963 | orchestrator |
2025-10-09 09:51:11.028974 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-09 09:51:13.560369 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:13.560478 | orchestrator |
2025-10-09 09:51:13.560495 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-10-09 09:51:13.620808 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:13.620861 | orchestrator |
2025-10-09 09:51:13.620876 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-10-09 09:51:14.123744 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:14.123850 | orchestrator |
2025-10-09 09:51:14.123866 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-10-09 09:51:14.161672 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:51:14.161713 | orchestrator |
2025-10-09 09:51:14.161724 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-10-09 09:51:14.504519 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:14.504621 | orchestrator |
2025-10-09 09:51:14.504636 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-10-09 09:51:14.562575 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:51:14.562754 | orchestrator |
2025-10-09 09:51:14.562771 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-10-09 09:51:14.907710 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:14.907748 | orchestrator |
2025-10-09 09:51:14.907760 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-10-09 09:51:15.076265 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:51:15.076335 | orchestrator |
2025-10-09 09:51:15.076347 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-10-09 09:51:15.076358 | orchestrator |
2025-10-09 09:51:15.076372 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-09 09:51:16.896824 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:16.896986 | orchestrator |
2025-10-09 09:51:16.897005 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-10-09 09:51:17.021309 | orchestrator | included: osism.services.traefik for testbed-manager
2025-10-09 09:51:17.021408 | orchestrator |
2025-10-09 09:51:17.021425 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-10-09 09:51:17.073087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-10-09 09:51:17.073152 | orchestrator |
2025-10-09 09:51:17.073167 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-10-09 09:51:18.211877 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-10-09 09:51:18.212017 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-10-09 09:51:18.212033 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-10-09 09:51:18.212044 | orchestrator |
2025-10-09 09:51:18.212057 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-10-09 09:51:20.097760 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-10-09 09:51:20.097870 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-10-09 09:51:20.097888 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-10-09 09:51:20.097901 | orchestrator |
2025-10-09 09:51:20.097948 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-10-09 09:51:20.830212 | orchestrator | changed: [testbed-manager] => (item=None)
2025-10-09 09:51:20.830302 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:20.830317 | orchestrator |
2025-10-09 09:51:20.830330 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-10-09 09:51:21.522884 | orchestrator | changed: [testbed-manager] => (item=None)
2025-10-09 09:51:21.523002 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:21.523017 | orchestrator |
2025-10-09 09:51:21.523029 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-10-09 09:51:21.580881 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:51:21.580937 | orchestrator |
2025-10-09 09:51:21.580950 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-10-09 09:51:21.963793 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:21.963870 | orchestrator |
2025-10-09 09:51:21.963884 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-10-09 09:51:22.048286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-10-09 09:51:22.048393 | orchestrator |
2025-10-09 09:51:22.048406 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-10-09 09:51:23.151736 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:23.151845 | orchestrator |
2025-10-09 09:51:23.151858 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-10-09 09:51:24.068636 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:24.068771 | orchestrator |
2025-10-09 09:51:24.068789 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-10-09 09:51:36.551129 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:36.551265 | orchestrator |
2025-10-09 09:51:36.551281 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-10-09 09:51:36.619141 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:51:36.619203 | orchestrator |
2025-10-09 09:51:36.619217 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-10-09 09:51:36.619228 | orchestrator |
2025-10-09 09:51:36.619239 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-09 09:51:38.502616 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:38.502742 | orchestrator |
2025-10-09 09:51:38.502795 | orchestrator | TASK [Apply manager role] ******************************************************
2025-10-09 09:51:38.653143 | orchestrator | included: osism.services.manager for testbed-manager
2025-10-09 09:51:38.653251 | orchestrator |
2025-10-09 09:51:38.653266 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-10-09 09:51:38.719417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-10-09 09:51:38.719448 | orchestrator |
2025-10-09 09:51:38.719460 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-10-09 09:51:41.573673 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:41.573797 | orchestrator |
2025-10-09 09:51:41.573811 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-10-09 09:51:41.624546 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:41.624590 | orchestrator |
2025-10-09 09:51:41.624605 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-10-09 09:51:41.761350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-10-09 09:51:41.761407 | orchestrator |
2025-10-09 09:51:41.761420 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-10-09 09:51:44.734693 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-10-09 09:51:44.734812 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-10-09 09:51:44.734824 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-10-09 09:51:44.734834 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-10-09 09:51:44.734843 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-10-09 09:51:44.734853 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-10-09 09:51:44.734862 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-10-09 09:51:44.734870 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-10-09 09:51:44.734880 | orchestrator |
2025-10-09 09:51:44.734942 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-10-09 09:51:45.401729 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:45.401855 | orchestrator |
2025-10-09 09:51:45.401870 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-10-09 09:51:46.071116 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:46.071234 | orchestrator |
2025-10-09 09:51:46.071249 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-10-09 09:51:46.144214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-10-09 09:51:46.144265 | orchestrator |
2025-10-09 09:51:46.144278 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-10-09 09:51:47.459489 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-10-09 09:51:47.459607 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-10-09 09:51:47.459619 | orchestrator |
2025-10-09 09:51:47.459631 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-10-09 09:51:48.178368 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:48.178470 | orchestrator |
2025-10-09 09:51:48.178479 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-10-09 09:51:48.236523 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:51:48.236547 | orchestrator |
2025-10-09 09:51:48.236555 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-10-09 09:51:48.309418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-10-09 09:51:48.309434 | orchestrator |
2025-10-09 09:51:48.309441 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-10-09 09:51:48.978426 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:48.978531 | orchestrator |
2025-10-09 09:51:48.978541 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-10-09 09:51:49.042303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-10-09 09:51:49.042439 | orchestrator |
2025-10-09 09:51:49.042466 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-10-09 09:51:50.514125 | orchestrator | changed: [testbed-manager] => (item=None)
2025-10-09 09:51:50.514220 | orchestrator | changed: [testbed-manager] => (item=None)
2025-10-09 09:51:50.514226 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:50.514233 | orchestrator |
2025-10-09 09:51:50.514239 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-10-09 09:51:51.169807 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:51.169956 | orchestrator |
2025-10-09 09:51:51.169972 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-10-09 09:51:51.223316 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:51:51.223368 | orchestrator |
2025-10-09 09:51:51.223382 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-10-09 09:51:51.308820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-10-09 09:51:51.308847 | orchestrator |
2025-10-09 09:51:51.308860 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-10-09 09:51:51.876462 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:51.876573 | orchestrator |
2025-10-09 09:51:51.876587 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-10-09 09:51:52.345562 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:52.345663 | orchestrator |
2025-10-09 09:51:52.345677 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-10-09 09:51:53.628858 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-10-09 09:51:53.629001 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-10-09 09:51:53.629016 | orchestrator |
2025-10-09 09:51:53.629029 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-10-09 09:51:54.342092 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:54.342193 | orchestrator |
2025-10-09 09:51:54.342208 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-10-09 09:51:54.748640 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:54.748737 | orchestrator |
2025-10-09 09:51:54.748752 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-10-09 09:51:55.145435 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:55.145532 | orchestrator |
2025-10-09 09:51:55.145546 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-10-09 09:51:55.199338 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:51:55.199444 | orchestrator |
2025-10-09 09:51:55.199461 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-10-09 09:51:55.276501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-10-09 09:51:55.276594 | orchestrator |
2025-10-09 09:51:55.276609 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-10-09 09:51:55.325170 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:55.325249 | orchestrator |
2025-10-09 09:51:55.325260 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-10-09 09:51:57.561531 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-10-09 09:51:57.561630 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-10-09 09:51:57.561644 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-10-09 09:51:57.561655 | orchestrator |
2025-10-09 09:51:57.561667 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-10-09 09:51:58.322347 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:58.322439 | orchestrator |
2025-10-09 09:51:58.322456 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-10-09 09:51:59.064490 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:59.064605 | orchestrator |
2025-10-09 09:51:59.064621 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-10-09 09:51:59.848218 | orchestrator | changed: [testbed-manager]
2025-10-09 09:51:59.848343 | orchestrator |
2025-10-09 09:51:59.848370 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-10-09 09:51:59.919518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-10-09 09:51:59.919597 | orchestrator |
2025-10-09 09:51:59.919612 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-10-09 09:51:59.955017 | orchestrator | ok: [testbed-manager]
2025-10-09 09:51:59.955091 | orchestrator |
2025-10-09 09:51:59.955117 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-10-09 09:52:00.761796 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-10-09 09:52:00.761948 | orchestrator |
2025-10-09 09:52:00.761966 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-10-09 09:52:00.849101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-10-09 09:52:00.849177 | orchestrator |
2025-10-09 09:52:00.849192 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-10-09 09:52:01.651983 | orchestrator | changed: [testbed-manager]
2025-10-09 09:52:01.652080 | orchestrator |
2025-10-09 09:52:01.652097 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-10-09 09:52:02.254144 | orchestrator | ok: [testbed-manager]
2025-10-09 09:52:02.254235 | orchestrator |
2025-10-09 09:52:02.254251 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-10-09 09:52:02.312691 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:52:02.312716 | orchestrator |
2025-10-09 09:52:02.312728 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-10-09 09:52:02.369729 | orchestrator | ok: [testbed-manager]
2025-10-09 09:52:02.369772 | orchestrator |
2025-10-09 09:52:02.369786 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-10-09 09:52:03.311167 | orchestrator | changed: [testbed-manager]
2025-10-09 09:52:03.311268 | orchestrator |
2025-10-09 09:52:03.311284 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-10-09 09:53:14.032606 | orchestrator | changed: [testbed-manager]
2025-10-09 09:53:14.032729 | orchestrator |
2025-10-09 09:53:14.032745 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-10-09 09:53:15.060739 | orchestrator | ok: [testbed-manager]
2025-10-09 09:53:15.060898 | orchestrator |
2025-10-09 09:53:15.060924 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-10-09 09:53:15.155974 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:53:15.156057 | orchestrator |
2025-10-09 09:53:15.156070 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-10-09 09:53:20.687349 | orchestrator | changed: [testbed-manager]
2025-10-09 09:53:20.687462 | orchestrator |
2025-10-09 09:53:20.687479 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-10-09 09:53:20.736290 | orchestrator | ok: [testbed-manager]
2025-10-09 09:53:20.736364 | orchestrator |
2025-10-09 09:53:20.736380 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-10-09 09:53:20.736393 | orchestrator |
2025-10-09 09:53:20.736404 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-10-09 09:53:20.792195 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:53:20.792252 | orchestrator |
2025-10-09 09:53:20.792266 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-10-09 09:54:20.840322 | orchestrator | Pausing for 60 seconds
2025-10-09 09:54:20.840440 | orchestrator | changed: [testbed-manager]
2025-10-09 09:54:20.840457 | orchestrator |
2025-10-09 09:54:20.840471 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-10-09 09:54:26.956730 | orchestrator | changed: [testbed-manager]
2025-10-09 09:54:26.956900 | orchestrator |
2025-10-09 09:54:26.956920 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-10-09 09:55:29.334336 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-10-09 09:55:29.334469 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-10-09 09:55:29.334483 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2025-10-09 09:55:29.334525 | orchestrator | changed: [testbed-manager]
2025-10-09 09:55:29.334539 | orchestrator |
2025-10-09 09:55:29.334550 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-10-09 09:55:40.753911 | orchestrator | changed: [testbed-manager]
2025-10-09 09:55:40.754165 | orchestrator |
2025-10-09 09:55:40.754186 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-10-09 09:55:40.842211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-10-09 09:55:40.842240 | orchestrator |
2025-10-09 09:55:40.842254 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-10-09 09:55:40.842266 | orchestrator |
2025-10-09 09:55:40.842279 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-10-09 09:55:40.901918 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:55:40.901948 | orchestrator |
2025-10-09 09:55:40.901961 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2025-10-09 09:55:40.972599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2025-10-09 09:55:40.972628 | orchestrator |
2025-10-09 09:55:40.972641 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2025-10-09 09:55:41.819783 | orchestrator | changed: [testbed-manager]
2025-10-09 09:55:41.819900 | orchestrator |
2025-10-09 09:55:41.819916 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2025-10-09 09:55:45.979529 | orchestrator | ok: [testbed-manager]
2025-10-09 09:55:46.323283 | orchestrator |
2025-10-09 09:55:46.323384 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2025-10-09 09:55:46.323424 | orchestrator | ok: [testbed-manager] => {
2025-10-09 09:55:46.323438 | orchestrator | "version_check_result.stdout_lines": [
2025-10-09 09:55:46.323450 | orchestrator | "=== OSISM Container Version Check ===",
2025-10-09 09:55:46.323461 | orchestrator | "Checking running containers against expected versions...",
2025-10-09 09:55:46.323473 | orchestrator | "",
2025-10-09 09:55:46.323484 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2025-10-09 09:55:46.323495 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2025-10-09 09:55:46.323506 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.323517 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2025-10-09 09:55:46.323529 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.323539 | orchestrator | "",
2025-10-09 09:55:46.323550 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2025-10-09 09:55:46.323562 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2025-10-09 09:55:46.323572 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.323583 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2025-10-09 09:55:46.323594 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.323605 | orchestrator | "",
2025-10-09 09:55:46.323616 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2025-10-09 09:55:46.323627 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2025-10-09 09:55:46.323637 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.323648 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2025-10-09 09:55:46.323659 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.323670 | orchestrator | "",
2025-10-09 09:55:46.323681 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2025-10-09 09:55:46.323692 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2025-10-09 09:55:46.323755 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.323767 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2025-10-09 09:55:46.323778 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.323789 | orchestrator | "",
2025-10-09 09:55:46.323800 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2025-10-09 09:55:46.323841 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-10-09 09:55:46.323853 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.323864 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-10-09 09:55:46.323875 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.323885 | orchestrator | "",
2025-10-09 09:55:46.323896 | orchestrator | "Checking service: osismclient (OSISM Client)",
2025-10-09 09:55:46.323908 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.323919 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.323930 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.323941 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.323951 | orchestrator | "",
2025-10-09 09:55:46.323962 | orchestrator | "Checking service: ara-server (ARA Server)",
2025-10-09 09:55:46.323973 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2025-10-09 09:55:46.323984 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.323995 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2025-10-09 09:55:46.324005 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.324016 | orchestrator | "",
2025-10-09 09:55:46.324034 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2025-10-09 09:55:46.324045 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-10-09 09:55:46.324055 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.324067 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-10-09 09:55:46.324078 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.324088 | orchestrator | "",
2025-10-09 09:55:46.324099 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2025-10-09 09:55:46.324110 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2025-10-09 09:55:46.324126 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.324138 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2025-10-09 09:55:46.324149 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.324160 | orchestrator | "",
2025-10-09 09:55:46.324171 | orchestrator | "Checking service: redis (Redis Cache)",
2025-10-09 09:55:46.324182 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-10-09 09:55:46.324193 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.324204 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-10-09 09:55:46.324214 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.324225 | orchestrator | "",
2025-10-09 09:55:46.324236 | orchestrator | "Checking service: api (OSISM API Service)",
2025-10-09 09:55:46.324247 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324258 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.324268 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324279 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.324290 | orchestrator | "",
2025-10-09 09:55:46.324301 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2025-10-09 09:55:46.324311 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324322 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.324333 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324344 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.324355 | orchestrator | "",
2025-10-09 09:55:46.324366 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2025-10-09 09:55:46.324376 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324387 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.324398 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324409 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.324420 | orchestrator | "",
2025-10-09 09:55:46.324430 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2025-10-09 09:55:46.324441 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324452 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.324473 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324484 | orchestrator | " Status: ✅ MATCH",
2025-10-09 09:55:46.324495 | orchestrator | "",
2025-10-09 09:55:46.324505 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2025-10-09 09:55:46.324530 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324541 | orchestrator | " Enabled: true",
2025-10-09 09:55:46.324552 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-10-09 09:55:46.324562
| orchestrator | " Status: ✅ MATCH", 2025-10-09 09:55:46.324573 | orchestrator | "", 2025-10-09 09:55:46.324584 | orchestrator | "=== Summary ===", 2025-10-09 09:55:46.324595 | orchestrator | "Errors (version mismatches): 0", 2025-10-09 09:55:46.324605 | orchestrator | "Warnings (expected containers not running): 0", 2025-10-09 09:55:46.324616 | orchestrator | "", 2025-10-09 09:55:46.324627 | orchestrator | "✅ All running containers match expected versions!" 2025-10-09 09:55:46.324638 | orchestrator | ] 2025-10-09 09:55:46.324649 | orchestrator | } 2025-10-09 09:55:46.324661 | orchestrator | 2025-10-09 09:55:46.324672 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-10-09 09:55:46.324684 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:55:46.324695 | orchestrator | 2025-10-09 09:55:46.324723 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:55:46.324734 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-10-09 09:55:46.324745 | orchestrator | 2025-10-09 09:55:46.324756 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-10-09 09:55:46.324767 | orchestrator | + deactivate 2025-10-09 09:55:46.324778 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-10-09 09:55:46.324791 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-10-09 09:55:46.324801 | orchestrator | + export PATH 2025-10-09 09:55:46.324812 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-10-09 09:55:46.324823 | orchestrator | + '[' -n '' ']' 2025-10-09 09:55:46.324834 | orchestrator | + hash -r 2025-10-09 09:55:46.324845 | orchestrator | + '[' -n '' ']' 2025-10-09 09:55:46.324856 | orchestrator | + unset VIRTUAL_ENV 2025-10-09 09:55:46.324866 | orchestrator | + 
unset VIRTUAL_ENV_PROMPT 2025-10-09 09:55:46.324877 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-10-09 09:55:46.324888 | orchestrator | + unset -f deactivate 2025-10-09 09:55:46.324899 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-10-09 09:55:46.324910 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-10-09 09:55:46.324921 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-10-09 09:55:46.324932 | orchestrator | + local max_attempts=60 2025-10-09 09:55:46.324942 | orchestrator | + local name=ceph-ansible 2025-10-09 09:55:46.324953 | orchestrator | + local attempt_num=1 2025-10-09 09:55:46.324964 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 09:55:46.324975 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 09:55:46.324986 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-10-09 09:55:46.324996 | orchestrator | + local max_attempts=60 2025-10-09 09:55:46.325007 | orchestrator | + local name=kolla-ansible 2025-10-09 09:55:46.325018 | orchestrator | + local attempt_num=1 2025-10-09 09:55:46.325042 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-10-09 09:55:46.328226 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 09:55:46.328251 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-10-09 09:55:46.328262 | orchestrator | + local max_attempts=60 2025-10-09 09:55:46.328273 | orchestrator | + local name=osism-ansible 2025-10-09 09:55:46.328284 | orchestrator | + local attempt_num=1 2025-10-09 09:55:46.329046 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-10-09 09:55:46.364470 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 09:55:46.364494 | orchestrator | + [[ true == \t\r\u\e ]] 2025-10-09 09:55:46.364506 | orchestrator | + sh -c 
/opt/configuration/scripts/disable-ara.sh 2025-10-09 09:55:47.150625 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-10-09 09:55:47.410835 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-10-09 09:55:47.411338 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-10-09 09:55:47.411363 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-10-09 09:55:47.411373 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-10-09 09:55:47.411385 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-10-09 09:55:47.411396 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:55:47.411425 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:55:47.411435 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2025-10-09 09:55:47.411445 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:55:47.411454 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-10-09 09:55:47.411464 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- 
osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:55:47.411474 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2025-10-09 09:55:47.411483 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2025-10-09 09:55:47.411492 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2025-10-09 09:55:47.411502 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2025-10-09 09:55:47.411512 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2025-10-09 09:55:47.421019 | orchestrator | ++ semver latest 7.0.0 2025-10-09 09:55:47.462821 | orchestrator | + [[ -1 -ge 0 ]] 2025-10-09 09:55:47.462840 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-10-09 09:55:47.462851 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-10-09 09:55:47.466775 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-10-09 09:55:59.797103 | orchestrator | 2025-10-09 09:55:59 | INFO  | Task ed1f7dbc-7de1-40b0-a3d6-c806c82c731e (resolvconf) was prepared for execution. 2025-10-09 09:55:59.797217 | orchestrator | 2025-10-09 09:55:59 | INFO  | It takes a moment until task ed1f7dbc-7de1-40b0-a3d6-c806c82c731e (resolvconf) has been started and output is visible here. 
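The `wait_for_container_healthy` calls expanded in the `set -x` trace above boil down to a polling loop over `docker inspect`. A minimal sketch consistent with that trace (the 5-second retry interval and the failure message are assumptions; the real helper lives in the testbed deploy scripts):

```shell
# Poll a container's health status until it reports "healthy" or the
# attempt budget is exhausted. Reconstructed from the trace above; the
# retry interval is an assumption, not taken from the log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} not healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

In the run above all three containers (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) were already `healthy`, so each call returned on the first probe.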
2025-10-09 09:56:14.671787 | orchestrator |
2025-10-09 09:56:14.671904 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-10-09 09:56:14.671922 | orchestrator |
2025-10-09 09:56:14.671934 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-09 09:56:14.671946 | orchestrator | Thursday 09 October 2025 09:56:04 +0000 (0:00:00.148) 0:00:00.148 ******
2025-10-09 09:56:14.671958 | orchestrator | ok: [testbed-manager]
2025-10-09 09:56:14.671970 | orchestrator |
2025-10-09 09:56:14.671981 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-10-09 09:56:14.671993 | orchestrator | Thursday 09 October 2025 09:56:08 +0000 (0:00:04.011) 0:00:04.159 ******
2025-10-09 09:56:14.672004 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:56:14.672016 | orchestrator |
2025-10-09 09:56:14.672027 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-10-09 09:56:14.672038 | orchestrator | Thursday 09 October 2025 09:56:08 +0000 (0:00:00.068) 0:00:04.227 ******
2025-10-09 09:56:14.672050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-10-09 09:56:14.672061 | orchestrator |
2025-10-09 09:56:14.672082 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-10-09 09:56:14.672094 | orchestrator | Thursday 09 October 2025 09:56:08 +0000 (0:00:00.100) 0:00:04.328 ******
2025-10-09 09:56:14.672105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-10-09 09:56:14.672116 | orchestrator |
2025-10-09 09:56:14.672127 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-10-09 09:56:14.672138 | orchestrator | Thursday 09 October 2025 09:56:08 +0000 (0:00:00.104) 0:00:04.433 ******
2025-10-09 09:56:14.672149 | orchestrator | ok: [testbed-manager]
2025-10-09 09:56:14.672160 | orchestrator |
2025-10-09 09:56:14.672171 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-10-09 09:56:14.672181 | orchestrator | Thursday 09 October 2025 09:56:09 +0000 (0:00:01.284) 0:00:05.717 ******
2025-10-09 09:56:14.672193 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:56:14.672204 | orchestrator |
2025-10-09 09:56:14.672215 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-10-09 09:56:14.672225 | orchestrator | Thursday 09 October 2025 09:56:09 +0000 (0:00:00.058) 0:00:05.776 ******
2025-10-09 09:56:14.672236 | orchestrator | ok: [testbed-manager]
2025-10-09 09:56:14.672246 | orchestrator |
2025-10-09 09:56:14.672257 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-10-09 09:56:14.672270 | orchestrator | Thursday 09 October 2025 09:56:10 +0000 (0:00:00.522) 0:00:06.298 ******
2025-10-09 09:56:14.672283 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:56:14.672295 | orchestrator |
2025-10-09 09:56:14.672308 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-10-09 09:56:14.672322 | orchestrator | Thursday 09 October 2025 09:56:10 +0000 (0:00:00.084) 0:00:06.382 ******
2025-10-09 09:56:14.672335 | orchestrator | changed: [testbed-manager]
2025-10-09 09:56:14.672347 | orchestrator |
2025-10-09 09:56:14.672359 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-10-09 09:56:14.672372 | orchestrator | Thursday 09 October 2025 09:56:10 +0000 (0:00:00.567) 0:00:06.949 ******
2025-10-09 09:56:14.672384 | orchestrator | changed: [testbed-manager]
2025-10-09 09:56:14.672397 | orchestrator |
2025-10-09 09:56:14.672409 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-10-09 09:56:14.672421 | orchestrator | Thursday 09 October 2025 09:56:12 +0000 (0:00:01.150) 0:00:08.099 ******
2025-10-09 09:56:14.672434 | orchestrator | ok: [testbed-manager]
2025-10-09 09:56:14.672446 | orchestrator |
2025-10-09 09:56:14.672459 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-10-09 09:56:14.672494 | orchestrator | Thursday 09 October 2025 09:56:13 +0000 (0:00:01.020) 0:00:09.120 ******
2025-10-09 09:56:14.672508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-10-09 09:56:14.672520 | orchestrator |
2025-10-09 09:56:14.672532 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-10-09 09:56:14.672544 | orchestrator | Thursday 09 October 2025 09:56:13 +0000 (0:00:00.072) 0:00:09.193 ******
2025-10-09 09:56:14.672557 | orchestrator | changed: [testbed-manager]
2025-10-09 09:56:14.672569 | orchestrator |
2025-10-09 09:56:14.672582 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 09:56:14.672596 | orchestrator | testbed-manager : ok=10 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-10-09 09:56:14.672610 | orchestrator |
2025-10-09 09:56:14.672621 | orchestrator |
2025-10-09 09:56:14.672632 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 09:56:14.672643 | orchestrator | Thursday 09 October 2025 09:56:14 +0000 (0:00:01.239) 0:00:10.432 ******
2025-10-09 09:56:14.672654 | orchestrator | ===============================================================================
2025-10-09 09:56:14.672664 | orchestrator | Gathering Facts --------------------------------------------------------- 4.01s
2025-10-09 09:56:14.672675 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.28s
2025-10-09 09:56:14.672705 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.24s
2025-10-09 09:56:14.672716 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.15s
2025-10-09 09:56:14.672727 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s
2025-10-09 09:56:14.672737 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s
2025-10-09 09:56:14.672765 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s
2025-10-09 09:56:14.672777 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.10s
2025-10-09 09:56:14.672788 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2025-10-09 09:56:14.672799 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-10-09 09:56:14.672815 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s
2025-10-09 09:56:14.672827 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-10-09 09:56:14.672838 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-10-09 09:56:15.014187 | orchestrator | + osism apply sshconfig
2025-10-09 09:56:27.304137 | orchestrator | 2025-10-09 09:56:27 | INFO  | Task a8ecf455-23f0-4050-8027-d4a069f8539a (sshconfig) was prepared for execution.
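Earlier in the trace, before `osism apply resolvconf` ran, the deploy script gated on the manager version: `semver latest 7.0.0` printed `-1` (the tag `latest` is not a valid semantic version), so a literal string match served as the fallback. A hypothetical condensation of that gate, assuming `semver` is a helper that prints a negative number, `0`, or a positive number for less-than, equal, and greater-than:

```shell
# Return 0 when the manager tag is at least 7.0.0 or is the moving
# "latest" tag; mirrors the [[ -1 -ge 0 ]] / [[ latest == latest ]]
# pair seen in the trace. "version_gate" is a hypothetical name.
version_gate() {
    local tag="$1"
    if [[ "$(semver "$tag" 7.0.0)" -ge 0 || "$tag" == "latest" ]]; then
        return 0
    fi
    return 1
}
```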
2025-10-09 09:56:27.304246 | orchestrator | 2025-10-09 09:56:27 | INFO  | It takes a moment until task a8ecf455-23f0-4050-8027-d4a069f8539a (sshconfig) has been started and output is visible here.
2025-10-09 09:56:39.820180 | orchestrator |
2025-10-09 09:56:39.820300 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-10-09 09:56:39.820318 | orchestrator |
2025-10-09 09:56:39.820330 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-10-09 09:56:39.820342 | orchestrator | Thursday 09 October 2025 09:56:31 +0000 (0:00:00.170) 0:00:00.170 ******
2025-10-09 09:56:39.820354 | orchestrator | ok: [testbed-manager]
2025-10-09 09:56:39.820365 | orchestrator |
2025-10-09 09:56:39.820377 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-10-09 09:56:39.820388 | orchestrator | Thursday 09 October 2025 09:56:32 +0000 (0:00:00.558) 0:00:00.729 ******
2025-10-09 09:56:39.820398 | orchestrator | changed: [testbed-manager]
2025-10-09 09:56:39.820410 | orchestrator |
2025-10-09 09:56:39.820421 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-10-09 09:56:39.820459 | orchestrator | Thursday 09 October 2025 09:56:32 +0000 (0:00:00.557) 0:00:01.286 ******
2025-10-09 09:56:39.820471 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-10-09 09:56:39.820482 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-10-09 09:56:39.820493 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-10-09 09:56:39.820504 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-10-09 09:56:39.820514 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-10-09 09:56:39.820525 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-10-09 09:56:39.820536 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-10-09 09:56:39.820547 | orchestrator |
2025-10-09 09:56:39.820558 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-10-09 09:56:39.820568 | orchestrator | Thursday 09 October 2025 09:56:38 +0000 (0:00:06.027) 0:00:07.314 ******
2025-10-09 09:56:39.820579 | orchestrator | skipping: [testbed-manager]
2025-10-09 09:56:39.820590 | orchestrator |
2025-10-09 09:56:39.820600 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-10-09 09:56:39.820611 | orchestrator | Thursday 09 October 2025 09:56:38 +0000 (0:00:00.095) 0:00:07.409 ******
2025-10-09 09:56:39.820622 | orchestrator | changed: [testbed-manager]
2025-10-09 09:56:39.820633 | orchestrator |
2025-10-09 09:56:39.820644 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 09:56:39.820656 | orchestrator | testbed-manager : ok=4 changed=3 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2025-10-09 09:56:39.820721 | orchestrator |
2025-10-09 09:56:39.820736 | orchestrator |
2025-10-09 09:56:39.820749 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 09:56:39.820761 | orchestrator | Thursday 09 October 2025 09:56:39 +0000 (0:00:00.663) 0:00:08.072 ******
2025-10-09 09:56:39.820773 | orchestrator | ===============================================================================
2025-10-09 09:56:39.820786 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.03s
2025-10-09 09:56:39.820798 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.66s
2025-10-09 09:56:39.820810 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s
2025-10-09 09:56:39.820823 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.56s
2025-10-09 09:56:39.820835 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s
2025-10-09 09:56:40.157746 | orchestrator | + osism apply known-hosts
2025-10-09 09:56:52.302393 | orchestrator | 2025-10-09 09:56:52 | INFO  | Task 3a96fa9d-8262-43fc-99ba-64cbaabdc494 (known-hosts) was prepared for execution.
2025-10-09 09:56:52.302510 | orchestrator | 2025-10-09 09:56:52 | INFO  | It takes a moment until task 3a96fa9d-8262-43fc-99ba-64cbaabdc494 (known-hosts) has been started and output is visible here.
2025-10-09 09:57:09.814530 | orchestrator |
2025-10-09 09:57:09.814646 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-10-09 09:57:09.814692 | orchestrator |
2025-10-09 09:57:09.814705 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-10-09 09:57:09.814718 | orchestrator | Thursday 09 October 2025 09:56:56 +0000 (0:00:00.174) 0:00:00.174 ******
2025-10-09 09:57:09.814730 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-10-09 09:57:09.814742 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-10-09 09:57:09.814753 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-10-09 09:57:09.814764 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-10-09 09:57:09.814775 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-10-09 09:57:09.814786 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-10-09 09:57:09.814797 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-10-09 09:57:09.814832 | orchestrator |
2025-10-09 09:57:09.814855 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-10-09 09:57:09.814867 | orchestrator | Thursday 09 October 2025 09:57:02 +0000 (0:00:06.071) 0:00:06.246 ******
2025-10-09 09:57:09.814879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-10-09 09:57:09.814892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-10-09 09:57:09.814903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-10-09 09:57:09.814914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-10-09 09:57:09.814925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-10-09 09:57:09.814936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-10-09 09:57:09.814947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-10-09 09:57:09.814957 | orchestrator |
2025-10-09 09:57:09.814968 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-09 09:57:09.814979 | orchestrator | Thursday 09 October 2025 09:57:02 +0000 (0:00:00.175) 0:00:06.421 ******
2025-10-09 09:57:09.814990 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL8IOF5vl2MIbectch3JASsjHVK304V4lZHxgsbmHpCH)
2025-10-09 09:57:09.815006 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqIdu3NrFX3mNeee7tJOIpBK/taXLGcDGB0GcBYEs6nDSEdlGA9SdXeHvmR1nGMlbCsb3rzXcn5bY548VxwIS4nYPcVLaNeuC+LG7VnnzUR8pQC1p+VjDh0gU9XcxX06uQ//IpzoD9WieKuZrh7crFvmYa+gIzwBNOtieVmYepxghzocrr2epVbkXh00gykHuRHUmZpV698BFk6BwNlRx/wEOJdWDjT8JIKtXgi8yMS9vXypHVJDFXresOq6u8XBNqBnVGsUKQH6ULpz7oKW2ySDYSbqvoM4G3redm2sgcsIInkjYhFqWamT8dB7q/ngOeZ5aUYwSFPe0+rD22sY6eukJdX/+y7HKYBBD/yLEn07hIa3o7BulNQWAPoJ8WnF5QNOfS1hkUe9vs77ws9kWEX1NjuyM5zY8Sdhp4HiYXqTm70w3mGGcAXjQ42wx2ljOIaNRvIjqxQOvE65cAK/abtQAguFH5zPISWjVUTH3DP9m5gTHFMD6VvsV8zZ6H+EU=)
2025-10-09 09:57:09.815021 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM4E6nxNCR+eiG0NHRsPOHI1Mb/tR0tUtrmdfM8DBaXUBXklZGO1kmraOrPWXB7TouHcTzrlso8GfnMxZPG7lr0=)
2025-10-09 09:57:09.815034 | orchestrator |
2025-10-09 09:57:09.815047 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-09 09:57:09.815060 | orchestrator | Thursday 09 October 2025 09:57:04 +0000 (0:00:01.209) 0:00:07.631 ******
2025-10-09 09:57:09.815073 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAe45OIxNdEdWvHKAIASp/ZitaSa0Po9cPxJdT0XVSnc)
2025-10-09 09:57:09.815119 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTAHMUDk1lhm69fdUN/O3Um4SIA50LP94Y8cOpQ+Wk4geNx87XKRK3ZwyJQ4jIYjMURGpFEtHOTuxAzOiVnkvMaSww3bh4Hg9HZDfz0E0Esr0Rkl1Fj3JAGvB2H0iQb78Tj7V1fcH7pbeobhdbHgLvJoCsySWeuaWYCxxyJZkxSH4YrcP29n3AMlhokf1g5uxxPms4si1J7d92BnxrgIWEdR4J4kTQCBx+WC9Welv29N7JW59aOqeSuvaDu9+Wrdgfmty1Rocd8kBvvYmtIiBsAmxN7JgF8i/rO2qzP5Jzdz6iedAz/sxYjtHCYSXT6/PUE1BuxdVdtFFCamTpqQZjn5Vtsj00v0+TSeHflhF8nwpECktssrhwLHyA7ODg09/w1M6tHZpqXeFxc7YnbczStOQ4LafZIImcURcrWffnWvnz6Fbn/XL6E2z1MS8YgtbDiENVlE0LFhEKV29GhW5Q/2dcb0BCSJI3zfSUOlCtTDHX+rJKYmoWJ08lvt4oGl0=)
2025-10-09 09:57:09.815143 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEvrIN5xhRLpCXDhBUWg/uQKn2t3Jl7TCctEEYReP+2PmyGJWywNuCCruXE6XkhWeIoBplbP9BZknBHbJtcMVTY=)
2025-10-09 09:57:09.815156 | orchestrator |
2025-10-09 09:57:09.815169 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-09 09:57:09.815182 | orchestrator | Thursday 09 October 2025 09:57:05 +0000 (0:00:01.147) 0:00:08.778 ******
2025-10-09 09:57:09.815195 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPbdTYxd1Tegv7MjGZhMfWjAEODFtN9eLYfWVwpWgPJ4ahxh1v3do48ani36HSTjjgCRrQZMEnQD+vnmvrXsKJY=)
2025-10-09 09:57:09.815277 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV7fi2WBpZVOqxZYJr+kEWWS/N3zUXYopu8WIZNbLaSYGaG+wBQcJdRo+PHtmhDkpaWdU63xSREko72+F6q/oaw1YsIkFfEBli8TVKWCuuFpGiK0HxbSan7yW7wavC1y/+DJoacnT7IkaTCmuXPXtctxCd7WTnicS8WnzxBLT680kJziNy6LCyD+yGD7IEPrik1yvZBP+fmJ8xdkQk1q/oUiWIeqbOYsdpyYQYoHgcNq3Ji34SQSWRARehHaJhw+RlJ6LcAGBKlVVIHi7dKq60dBoEUsIQAhwijYRzWKS8039IUFdSbaoR9uHPBp0qEX1Q8H3J0BsS9nB5Q2GaEVrVhiBak1D1h5qZMFyGU0BrYGNQzwBqmOB5ODMUYKlmnCnlZ34W51UyA6GYi7FMfJ7bQinMlRyHIaPGS+d1i8wqsfPLGXilziVGIIY4JIRMKUPx6KC1Lha+SXozONCVoPnFFZ1zBLsIV0IXsWIocgbOsIEIc1g7lGSdPqjl4VETL4s=)
2025-10-09 09:57:09.815292 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDqradUMvYFdjdc6tp63qVUQzZj1DZ7s/POGcRSZaQJ4)
2025-10-09 09:57:09.815305 | orchestrator |
2025-10-09 09:57:09.815318 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-09 09:57:09.815330 | orchestrator | Thursday 09 October 2025 09:57:06 +0000 (0:00:01.113) 0:00:09.891 ******
2025-10-09 09:57:09.815344 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYoG9mbbWwolrw9By4VNERQtG0i38MQb4AyPm+Z7TSN)
2025-10-09 09:57:09.815357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0pOpWl8k4oQKlhBB4F7d/2yDWYeXbLoy5Wj47rUR2XSMr3jl2NoQU1ohdApXTr2uQWlqMPiG4rCIe9vo/3lBKN5fqvvzuwurMkT9DX8yNGeogrnJfPUmesbMVUcF8etThvzdFkmo5wQAn5r/T96ilUxN6Nl6U7hatMy71kOlgjXM/0HONutFm1LB83UP/LDvM31Uh1dprzS8YK/fGWMLtmdgjgxnyGPUI7ou8Ei3Pv38BC9V2VfldaBge2twlhdFGumkBPkVMVo6H4JhbMLzD3i3aPcfyP8BGpw4Z2D6CN0GkXYx1YNfaKj6SOhPMXqz2GIqJux44xgi2cwwl7i0hMEa2HfGTU68qZmhpEjkmOG1DKGQv+vmJP4C/YeN9Hr5zBhGR4G7M2lSwuy1nJkuortnB7bx3EborE7Zjkov1E2/V5IrqMtdlhsM826rE5nGJGeRNplgqPH+I+vNyyIGdDI2clCLqc5kHivy0pq+z61B3w5ET442e21Sx/H/QHDE=)
2025-10-09 09:57:09.815370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAJMoAwjNfX9gN4XiTqcwNQZ6xtCtQCS65LMtkyo2gLgMsplW9ROenXelF/Wt2Ls8pXS41yr8yPh/M2JelsUSck=)
2025-10-09 09:57:09.815382 | orchestrator |
2025-10-09 09:57:09.815395 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-09 09:57:09.815406 | orchestrator | Thursday 09 October 2025 09:57:07 +0000 (0:00:01.142) 0:00:11.034 ******
2025-10-09 09:57:09.815417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGCdhELpFqhJmrJ15QiBrZOw81KnEcwXcArOrghqxzo6n2KriYBGdm9CZHLyamURPucTvElszcUIkoFXsGZOOYg=)
2025-10-09 09:57:09.815428 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC79+D64SKPeFeaNl5pGuYsNUelPhKormGghYKXDcPO5AYB4LAeRnXDhRPyjgtacSfX8wWO4eNnd9Qc2WnrUy3X4CXQzdEyu9zAnLCf3eI5IIqhuv49fst8WEYwds4sYtfWpuKXLMrlOV8gHHE3obbNpo9+IA96puiZwoz08nL3qV660OixT1mLZT+h1Dq3tsA1LMu3mFcP/HP23SVIUfo1Cg7YzYqr415t5ekF6eKMtQyMRvB8ojngFuK/BX0N6XTU+7Spg+uNw6KVaE1WDSomyNPGbWEPbFlX6N720q4xFQOB3car5MCk1pBf/Zwq+twrf1jSTY5F8VJVNJwOALRx8A1V2VE2704SmW5/5T/Gk/mYLCJdZsgLJStLTZOBwi3tzX0odNy9sMHFvY5Rwu7l6ndQYpSTWpxUVUi0UoEJpBNApIh9OziOSBIAlpvYIJp/kBRXHltebYbEjUd1Ar8aiTtyMn1Jha9iOkdlH39TxByiX6OIVSu1mfe1QjjvfN8=)
2025-10-09 09:57:09.815447 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPIY9tsMBWyc9Pg6wxC+56UUOupcIZxriz79Y0CpF41b)
2025-10-09 09:57:09.815458 | orchestrator |
2025-10-09 09:57:09.815469 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-09 09:57:09.815480 | orchestrator | Thursday 09 October 2025 09:57:08 +0000 (0:00:01.110) 0:00:12.144 ******
2025-10-09 09:57:09.815499 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRXhU937W2GKvzcXPtZu9RQmMXidPMrCP4OeZz9xsiyD04aGfSHywjI8avlQ5XExTxboR+rgf2dnjSpnQ/MWcrx8IP61zff04ZCShrT9KhsY2HQMQQrka9b79vcr2UeM/XQ1SJ8m/C/t3EWsURtSM+BloW8tMzCZ2XxNmkzpLKde+OLutgl+rKnkv1H82WOf77xLdUzMZEvP9djqVFjjnKP7QWsZHU6ge7rNiqHa7Jy6a2abQa2EHMHYyyPU9cZmDqS2CYWPE1giPHbN0vCnShaARh7gBt8RtT4fzb+ZW5dKY6+W0zacqwettRl/OZjGdLnWib9PUelagXhURsZ03QWDSH0/v2AnkGRPqwy7BSu5UTIZjIVt2orBovtRMCx2EHG/OkhYf7PI7YcNejfEC3tNcKtxaPujBMbr/eFFyYfhIwwxB+iyA9fuG2Up3DqLsgSt+nILQqlQpffQAwrFd8aRa20s8vf9uuv2s7tzCTvevq+1wZNN7CvItE1J5rTK0=)
2025-10-09 09:57:21.229350 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK7cJXe3qclt+gqb63pAWVuqUi7ShdCotw7HAUrzkeqZqSh37WQ7qpvoc/xVkA9amKfKnSQ10VVtiT9evab24m8=)
2025-10-09 09:57:21.229463 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILQrHcchVlHiLM30vD/bntUbAgo4QBRq3V/6ch25PKD1)
2025-10-09 09:57:21.229479 | orchestrator |
2025-10-09 09:57:21.229492 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-10-09 09:57:21.229504 | orchestrator | Thursday 09 October 2025 09:57:09 +0000 (0:00:01.200) 0:00:13.345 ******
2025-10-09 09:57:21.229516 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYF278e8uWa1jYezjonxrJpOOg4Llqqhvhu1eWDcaOx9MGWZV61eO5MpIhBG7B1ZG75kAv050ncx36i/mK0i9u6fth+bQ8phscXjKYLH9IlC+zl571BteRo06vnaxSBD8d5TxxXG36tJu7aJ3kR996+9g/JBhSubhvFcHr8uOS2MCGAtPEUQTbGFQ6fs+Om803cp4l+kH1YvGKRmI+Rqf8yPydug3ba73mtZPyfFAN+93DxRbx9iocQepoNsXpUEkzBP0Z6RC3YQ9uJzuPs7GM7K4XhGn8eQ+AOfL/ecwZXmxIvWpeoDv2ikyBJ9ZFb28rs8GrJzhdl9Ji9qUyclKGtTZrLsOlTcFjtHQLtVStzOR6CcTlUjdfZitA0ZIyFVpAB1sy1jhINm4IXd6lZobtpTjHK1Mc9TOwUKR3x6X9K3Wk1OzbIni0OzxakrAFUnolg5EdFDd2aOm43Q4fjKcsY99KD+tfc2YPnHoXZ82bVt/JxPXIvomZMH/2mAHjBjE=)
2025-10-09 09:57:21.229528 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIzcLJqBZ2W8YMa/T1bksSyZxGl6FHIIVDmxpHgVOSTUN7aSHh9WZkaoPdgk39UeaSEpXiIawB8T2DWHiGP9EaQ=)
2025-10-09 09:57:21.229539 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHIpZpCyuiQUtTnGDO5iyNySSnPXGkhME97Den2Gc3qx)
2025-10-09 09:57:21.229548 | orchestrator |
2025-10-09 09:57:21.229558 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-10-09 09:57:21.229569 | orchestrator | Thursday 09 October 2025 09:57:10 +0000 (0:00:01.115) 0:00:14.460 ******
2025-10-09 09:57:21.229579 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-10-09 09:57:21.229590 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-10-09 09:57:21.229617 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-10-09 09:57:21.229628 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-10-09 09:57:21.229638 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-10-09 09:57:21.229691 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-10-09 09:57:21.229709 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-10-09 09:57:21.229719 | orchestrator | 2025-10-09 09:57:21.229729 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-10-09 09:57:21.229740 | orchestrator | Thursday 09 October 2025 09:57:16 +0000 (0:00:05.574) 0:00:20.035 ****** 2025-10-09 09:57:21.229772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-10-09 09:57:21.229784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-10-09 09:57:21.229793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-10-09 09:57:21.229803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-10-09 09:57:21.229813 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-10-09 09:57:21.229822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-10-09 09:57:21.229831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-10-09 09:57:21.229841 | orchestrator | 2025-10-09 09:57:21.229850 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:57:21.229860 | orchestrator | Thursday 09 October 2025 09:57:16 +0000 (0:00:00.186) 0:00:20.221 ****** 2025-10-09 09:57:21.229870 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL8IOF5vl2MIbectch3JASsjHVK304V4lZHxgsbmHpCH) 2025-10-09 09:57:21.229903 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqIdu3NrFX3mNeee7tJOIpBK/taXLGcDGB0GcBYEs6nDSEdlGA9SdXeHvmR1nGMlbCsb3rzXcn5bY548VxwIS4nYPcVLaNeuC+LG7VnnzUR8pQC1p+VjDh0gU9XcxX06uQ//IpzoD9WieKuZrh7crFvmYa+gIzwBNOtieVmYepxghzocrr2epVbkXh00gykHuRHUmZpV698BFk6BwNlRx/wEOJdWDjT8JIKtXgi8yMS9vXypHVJDFXresOq6u8XBNqBnVGsUKQH6ULpz7oKW2ySDYSbqvoM4G3redm2sgcsIInkjYhFqWamT8dB7q/ngOeZ5aUYwSFPe0+rD22sY6eukJdX/+y7HKYBBD/yLEn07hIa3o7BulNQWAPoJ8WnF5QNOfS1hkUe9vs77ws9kWEX1NjuyM5zY8Sdhp4HiYXqTm70w3mGGcAXjQ42wx2ljOIaNRvIjqxQOvE65cAK/abtQAguFH5zPISWjVUTH3DP9m5gTHFMD6VvsV8zZ6H+EU=) 2025-10-09 09:57:21.229918 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM4E6nxNCR+eiG0NHRsPOHI1Mb/tR0tUtrmdfM8DBaXUBXklZGO1kmraOrPWXB7TouHcTzrlso8GfnMxZPG7lr0=) 2025-10-09 
09:57:21.229929 | orchestrator | 2025-10-09 09:57:21.229941 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:57:21.229952 | orchestrator | Thursday 09 October 2025 09:57:17 +0000 (0:00:01.122) 0:00:21.344 ****** 2025-10-09 09:57:21.229962 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEvrIN5xhRLpCXDhBUWg/uQKn2t3Jl7TCctEEYReP+2PmyGJWywNuCCruXE6XkhWeIoBplbP9BZknBHbJtcMVTY=) 2025-10-09 09:57:21.229974 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTAHMUDk1lhm69fdUN/O3Um4SIA50LP94Y8cOpQ+Wk4geNx87XKRK3ZwyJQ4jIYjMURGpFEtHOTuxAzOiVnkvMaSww3bh4Hg9HZDfz0E0Esr0Rkl1Fj3JAGvB2H0iQb78Tj7V1fcH7pbeobhdbHgLvJoCsySWeuaWYCxxyJZkxSH4YrcP29n3AMlhokf1g5uxxPms4si1J7d92BnxrgIWEdR4J4kTQCBx+WC9Welv29N7JW59aOqeSuvaDu9+Wrdgfmty1Rocd8kBvvYmtIiBsAmxN7JgF8i/rO2qzP5Jzdz6iedAz/sxYjtHCYSXT6/PUE1BuxdVdtFFCamTpqQZjn5Vtsj00v0+TSeHflhF8nwpECktssrhwLHyA7ODg09/w1M6tHZpqXeFxc7YnbczStOQ4LafZIImcURcrWffnWvnz6Fbn/XL6E2z1MS8YgtbDiENVlE0LFhEKV29GhW5Q/2dcb0BCSJI3zfSUOlCtTDHX+rJKYmoWJ08lvt4oGl0=) 2025-10-09 09:57:21.229986 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAe45OIxNdEdWvHKAIASp/ZitaSa0Po9cPxJdT0XVSnc) 2025-10-09 09:57:21.230004 | orchestrator | 2025-10-09 09:57:21.230015 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:57:21.230070 | orchestrator | Thursday 09 October 2025 09:57:18 +0000 (0:00:01.141) 0:00:22.485 ****** 2025-10-09 09:57:21.230083 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCV7fi2WBpZVOqxZYJr+kEWWS/N3zUXYopu8WIZNbLaSYGaG+wBQcJdRo+PHtmhDkpaWdU63xSREko72+F6q/oaw1YsIkFfEBli8TVKWCuuFpGiK0HxbSan7yW7wavC1y/+DJoacnT7IkaTCmuXPXtctxCd7WTnicS8WnzxBLT680kJziNy6LCyD+yGD7IEPrik1yvZBP+fmJ8xdkQk1q/oUiWIeqbOYsdpyYQYoHgcNq3Ji34SQSWRARehHaJhw+RlJ6LcAGBKlVVIHi7dKq60dBoEUsIQAhwijYRzWKS8039IUFdSbaoR9uHPBp0qEX1Q8H3J0BsS9nB5Q2GaEVrVhiBak1D1h5qZMFyGU0BrYGNQzwBqmOB5ODMUYKlmnCnlZ34W51UyA6GYi7FMfJ7bQinMlRyHIaPGS+d1i8wqsfPLGXilziVGIIY4JIRMKUPx6KC1Lha+SXozONCVoPnFFZ1zBLsIV0IXsWIocgbOsIEIc1g7lGSdPqjl4VETL4s=) 2025-10-09 09:57:21.230094 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPbdTYxd1Tegv7MjGZhMfWjAEODFtN9eLYfWVwpWgPJ4ahxh1v3do48ani36HSTjjgCRrQZMEnQD+vnmvrXsKJY=) 2025-10-09 09:57:21.230106 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDqradUMvYFdjdc6tp63qVUQzZj1DZ7s/POGcRSZaQJ4) 2025-10-09 09:57:21.230117 | orchestrator | 2025-10-09 09:57:21.230128 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:57:21.230139 | orchestrator | Thursday 09 October 2025 09:57:20 +0000 (0:00:01.164) 0:00:23.649 ****** 2025-10-09 09:57:21.230150 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAJMoAwjNfX9gN4XiTqcwNQZ6xtCtQCS65LMtkyo2gLgMsplW9ROenXelF/Wt2Ls8pXS41yr8yPh/M2JelsUSck=) 2025-10-09 09:57:21.230168 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0pOpWl8k4oQKlhBB4F7d/2yDWYeXbLoy5Wj47rUR2XSMr3jl2NoQU1ohdApXTr2uQWlqMPiG4rCIe9vo/3lBKN5fqvvzuwurMkT9DX8yNGeogrnJfPUmesbMVUcF8etThvzdFkmo5wQAn5r/T96ilUxN6Nl6U7hatMy71kOlgjXM/0HONutFm1LB83UP/LDvM31Uh1dprzS8YK/fGWMLtmdgjgxnyGPUI7ou8Ei3Pv38BC9V2VfldaBge2twlhdFGumkBPkVMVo6H4JhbMLzD3i3aPcfyP8BGpw4Z2D6CN0GkXYx1YNfaKj6SOhPMXqz2GIqJux44xgi2cwwl7i0hMEa2HfGTU68qZmhpEjkmOG1DKGQv+vmJP4C/YeN9Hr5zBhGR4G7M2lSwuy1nJkuortnB7bx3EborE7Zjkov1E2/V5IrqMtdlhsM826rE5nGJGeRNplgqPH+I+vNyyIGdDI2clCLqc5kHivy0pq+z61B3w5ET442e21Sx/H/QHDE=) 2025-10-09 09:57:21.230191 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYoG9mbbWwolrw9By4VNERQtG0i38MQb4AyPm+Z7TSN) 2025-10-09 09:57:25.956905 | orchestrator | 2025-10-09 09:57:25.957007 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:57:25.957023 | orchestrator | Thursday 09 October 2025 09:57:21 +0000 (0:00:01.113) 0:00:24.763 ****** 2025-10-09 09:57:25.957036 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGCdhELpFqhJmrJ15QiBrZOw81KnEcwXcArOrghqxzo6n2KriYBGdm9CZHLyamURPucTvElszcUIkoFXsGZOOYg=) 2025-10-09 09:57:25.957053 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC79+D64SKPeFeaNl5pGuYsNUelPhKormGghYKXDcPO5AYB4LAeRnXDhRPyjgtacSfX8wWO4eNnd9Qc2WnrUy3X4CXQzdEyu9zAnLCf3eI5IIqhuv49fst8WEYwds4sYtfWpuKXLMrlOV8gHHE3obbNpo9+IA96puiZwoz08nL3qV660OixT1mLZT+h1Dq3tsA1LMu3mFcP/HP23SVIUfo1Cg7YzYqr415t5ekF6eKMtQyMRvB8ojngFuK/BX0N6XTU+7Spg+uNw6KVaE1WDSomyNPGbWEPbFlX6N720q4xFQOB3car5MCk1pBf/Zwq+twrf1jSTY5F8VJVNJwOALRx8A1V2VE2704SmW5/5T/Gk/mYLCJdZsgLJStLTZOBwi3tzX0odNy9sMHFvY5Rwu7l6ndQYpSTWpxUVUi0UoEJpBNApIh9OziOSBIAlpvYIJp/kBRXHltebYbEjUd1Ar8aiTtyMn1Jha9iOkdlH39TxByiX6OIVSu1mfe1QjjvfN8=) 2025-10-09 09:57:25.957068 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPIY9tsMBWyc9Pg6wxC+56UUOupcIZxriz79Y0CpF41b) 2025-10-09 09:57:25.957080 | orchestrator | 2025-10-09 09:57:25.957092 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:57:25.957128 | orchestrator | Thursday 09 October 2025 09:57:22 +0000 (0:00:01.113) 0:00:25.877 ****** 2025-10-09 09:57:25.957140 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRXhU937W2GKvzcXPtZu9RQmMXidPMrCP4OeZz9xsiyD04aGfSHywjI8avlQ5XExTxboR+rgf2dnjSpnQ/MWcrx8IP61zff04ZCShrT9KhsY2HQMQQrka9b79vcr2UeM/XQ1SJ8m/C/t3EWsURtSM+BloW8tMzCZ2XxNmkzpLKde+OLutgl+rKnkv1H82WOf77xLdUzMZEvP9djqVFjjnKP7QWsZHU6ge7rNiqHa7Jy6a2abQa2EHMHYyyPU9cZmDqS2CYWPE1giPHbN0vCnShaARh7gBt8RtT4fzb+ZW5dKY6+W0zacqwettRl/OZjGdLnWib9PUelagXhURsZ03QWDSH0/v2AnkGRPqwy7BSu5UTIZjIVt2orBovtRMCx2EHG/OkhYf7PI7YcNejfEC3tNcKtxaPujBMbr/eFFyYfhIwwxB+iyA9fuG2Up3DqLsgSt+nILQqlQpffQAwrFd8aRa20s8vf9uuv2s7tzCTvevq+1wZNN7CvItE1J5rTK0=) 2025-10-09 09:57:25.957165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK7cJXe3qclt+gqb63pAWVuqUi7ShdCotw7HAUrzkeqZqSh37WQ7qpvoc/xVkA9amKfKnSQ10VVtiT9evab24m8=) 2025-10-09 09:57:25.957177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILQrHcchVlHiLM30vD/bntUbAgo4QBRq3V/6ch25PKD1) 2025-10-09 09:57:25.957188 | orchestrator | 2025-10-09 09:57:25.957199 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-10-09 09:57:25.957210 | orchestrator | Thursday 09 October 2025 09:57:23 +0000 (0:00:01.103) 0:00:26.980 ****** 2025-10-09 09:57:25.957221 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIzcLJqBZ2W8YMa/T1bksSyZxGl6FHIIVDmxpHgVOSTUN7aSHh9WZkaoPdgk39UeaSEpXiIawB8T2DWHiGP9EaQ=) 2025-10-09 09:57:25.957232 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYF278e8uWa1jYezjonxrJpOOg4Llqqhvhu1eWDcaOx9MGWZV61eO5MpIhBG7B1ZG75kAv050ncx36i/mK0i9u6fth+bQ8phscXjKYLH9IlC+zl571BteRo06vnaxSBD8d5TxxXG36tJu7aJ3kR996+9g/JBhSubhvFcHr8uOS2MCGAtPEUQTbGFQ6fs+Om803cp4l+kH1YvGKRmI+Rqf8yPydug3ba73mtZPyfFAN+93DxRbx9iocQepoNsXpUEkzBP0Z6RC3YQ9uJzuPs7GM7K4XhGn8eQ+AOfL/ecwZXmxIvWpeoDv2ikyBJ9ZFb28rs8GrJzhdl9Ji9qUyclKGtTZrLsOlTcFjtHQLtVStzOR6CcTlUjdfZitA0ZIyFVpAB1sy1jhINm4IXd6lZobtpTjHK1Mc9TOwUKR3x6X9K3Wk1OzbIni0OzxakrAFUnolg5EdFDd2aOm43Q4fjKcsY99KD+tfc2YPnHoXZ82bVt/JxPXIvomZMH/2mAHjBjE=) 2025-10-09 09:57:25.957244 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHIpZpCyuiQUtTnGDO5iyNySSnPXGkhME97Den2Gc3qx) 2025-10-09 09:57:25.957255 | orchestrator | 2025-10-09 09:57:25.957265 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-10-09 09:57:25.957276 | orchestrator | Thursday 09 October 2025 09:57:24 +0000 (0:00:01.160) 0:00:28.141 ****** 2025-10-09 09:57:25.957288 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-10-09 09:57:25.957299 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-10-09 09:57:25.957310 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-10-09 09:57:25.957321 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-10-09 09:57:25.957332 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-10-09 09:57:25.957342 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-10-09 09:57:25.957353 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-10-09 09:57:25.957364 | orchestrator | skipping: 
[testbed-manager] 2025-10-09 09:57:25.957375 | orchestrator | 2025-10-09 09:57:25.957405 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-10-09 09:57:25.957416 | orchestrator | Thursday 09 October 2025 09:57:24 +0000 (0:00:00.177) 0:00:28.319 ****** 2025-10-09 09:57:25.957427 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:57:25.957439 | orchestrator | 2025-10-09 09:57:25.957452 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-10-09 09:57:25.957464 | orchestrator | Thursday 09 October 2025 09:57:24 +0000 (0:00:00.071) 0:00:28.391 ****** 2025-10-09 09:57:25.957486 | orchestrator | skipping: [testbed-manager] 2025-10-09 09:57:25.957498 | orchestrator | 2025-10-09 09:57:25.957511 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-10-09 09:57:25.957524 | orchestrator | Thursday 09 October 2025 09:57:24 +0000 (0:00:00.060) 0:00:28.451 ****** 2025-10-09 09:57:25.957536 | orchestrator | changed: [testbed-manager] 2025-10-09 09:57:25.957549 | orchestrator | 2025-10-09 09:57:25.957561 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:57:25.957574 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 09:57:25.957588 | orchestrator | 2025-10-09 09:57:25.957600 | orchestrator | 2025-10-09 09:57:25.957613 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 09:57:25.957626 | orchestrator | Thursday 09 October 2025 09:57:25 +0000 (0:00:00.807) 0:00:29.259 ****** 2025-10-09 09:57:25.957638 | orchestrator | =============================================================================== 2025-10-09 09:57:25.957674 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.07s 2025-10-09 
09:57:25.957687 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.57s 2025-10-09 09:57:25.957700 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-10-09 09:57:25.957712 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-10-09 09:57:25.957725 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-10-09 09:57:25.957737 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-10-09 09:57:25.957750 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-10-09 09:57:25.957762 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-10-09 09:57:25.957775 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-10-09 09:57:25.957787 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-10-09 09:57:25.957799 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-10-09 09:57:25.957809 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-10-09 09:57:25.957820 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-10-09 09:57:25.957831 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-10-09 09:57:25.957850 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-10-09 09:57:25.957861 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-10-09 09:57:25.957872 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.81s 2025-10-09 
09:57:25.957882 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2025-10-09 09:57:25.957893 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-10-09 09:57:25.957904 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-10-09 09:57:26.296845 | orchestrator | + osism apply squid 2025-10-09 09:57:38.430182 | orchestrator | 2025-10-09 09:57:38 | INFO  | Task f865acab-dc84-4959-b9f1-341f7dd8bab0 (squid) was prepared for execution. 2025-10-09 09:57:38.430302 | orchestrator | 2025-10-09 09:57:38 | INFO  | It takes a moment until task f865acab-dc84-4959-b9f1-341f7dd8bab0 (squid) has been started and output is visible here. 2025-10-09 09:59:35.662189 | orchestrator | 2025-10-09 09:59:35.662304 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-10-09 09:59:35.662320 | orchestrator | 2025-10-09 09:59:35.662332 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-10-09 09:59:35.662343 | orchestrator | Thursday 09 October 2025 09:57:42 +0000 (0:00:00.168) 0:00:00.168 ****** 2025-10-09 09:59:35.662378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-10-09 09:59:35.662390 | orchestrator | 2025-10-09 09:59:35.662402 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-10-09 09:59:35.662413 | orchestrator | Thursday 09 October 2025 09:57:42 +0000 (0:00:00.085) 0:00:00.253 ****** 2025-10-09 09:59:35.662424 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:35.662436 | orchestrator | 2025-10-09 09:59:35.662447 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-10-09 
09:59:35.662457 | orchestrator | Thursday 09 October 2025 09:57:44 +0000 (0:00:01.594) 0:00:01.847 ****** 2025-10-09 09:59:35.662469 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-10-09 09:59:35.662480 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-10-09 09:59:35.662491 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-10-09 09:59:35.662502 | orchestrator | 2025-10-09 09:59:35.662512 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-10-09 09:59:35.662523 | orchestrator | Thursday 09 October 2025 09:57:45 +0000 (0:00:01.216) 0:00:03.063 ****** 2025-10-09 09:59:35.662534 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-10-09 09:59:35.662545 | orchestrator | 2025-10-09 09:59:35.662555 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-10-09 09:59:35.662566 | orchestrator | Thursday 09 October 2025 09:57:46 +0000 (0:00:01.137) 0:00:04.201 ****** 2025-10-09 09:59:35.662577 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:35.662615 | orchestrator | 2025-10-09 09:59:35.662626 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-10-09 09:59:35.662637 | orchestrator | Thursday 09 October 2025 09:57:47 +0000 (0:00:00.366) 0:00:04.568 ****** 2025-10-09 09:59:35.662648 | orchestrator | changed: [testbed-manager] 2025-10-09 09:59:35.662659 | orchestrator | 2025-10-09 09:59:35.662670 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-10-09 09:59:35.662680 | orchestrator | Thursday 09 October 2025 09:57:48 +0000 (0:00:00.973) 0:00:05.541 ****** 2025-10-09 09:59:35.662691 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-10-09 09:59:35.662702 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:35.662713 | orchestrator | 2025-10-09 09:59:35.662724 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-10-09 09:59:35.662734 | orchestrator | Thursday 09 October 2025 09:58:20 +0000 (0:00:32.065) 0:00:37.606 ****** 2025-10-09 09:59:35.662745 | orchestrator | changed: [testbed-manager] 2025-10-09 09:59:35.662755 | orchestrator | 2025-10-09 09:59:35.662766 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-10-09 09:59:35.662777 | orchestrator | Thursday 09 October 2025 09:58:34 +0000 (0:00:14.294) 0:00:51.900 ****** 2025-10-09 09:59:35.662787 | orchestrator | Pausing for 60 seconds 2025-10-09 09:59:35.662799 | orchestrator | changed: [testbed-manager] 2025-10-09 09:59:35.662810 | orchestrator | 2025-10-09 09:59:35.662820 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-10-09 09:59:35.662831 | orchestrator | Thursday 09 October 2025 09:59:34 +0000 (0:01:00.099) 0:01:52.000 ****** 2025-10-09 09:59:35.662842 | orchestrator | ok: [testbed-manager] 2025-10-09 09:59:35.662852 | orchestrator | 2025-10-09 09:59:35.662863 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-10-09 09:59:35.662874 | orchestrator | Thursday 09 October 2025 09:59:34 +0000 (0:00:00.071) 0:01:52.072 ****** 2025-10-09 09:59:35.662884 | orchestrator | changed: [testbed-manager] 2025-10-09 09:59:35.662895 | orchestrator | 2025-10-09 09:59:35.662906 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 09:59:35.662916 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 09:59:35.662935 | orchestrator | 2025-10-09 09:59:35.662946 | orchestrator | 2025-10-09 09:59:35.662956 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-10-09 09:59:35.662967 | orchestrator | Thursday 09 October 2025 09:59:35 +0000 (0:00:00.724) 0:01:52.796 ****** 2025-10-09 09:59:35.662978 | orchestrator | =============================================================================== 2025-10-09 09:59:35.662988 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2025-10-09 09:59:35.663000 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.07s 2025-10-09 09:59:35.663010 | orchestrator | osism.services.squid : Restart squid service --------------------------- 14.29s 2025-10-09 09:59:35.663021 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.59s 2025-10-09 09:59:35.663032 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2025-10-09 09:59:35.663042 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.14s 2025-10-09 09:59:35.663053 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s 2025-10-09 09:59:35.663063 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.72s 2025-10-09 09:59:35.663074 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-10-09 09:59:35.663085 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-10-09 09:59:35.663095 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-10-09 09:59:36.009414 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-10-09 09:59:36.009745 | orchestrator | ++ semver latest 9.0.0 2025-10-09 09:59:36.065429 | orchestrator | + [[ -1 -lt 0 ]] 2025-10-09 09:59:36.065485 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-10-09 09:59:36.065915 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-10-09 09:59:48.212104 | orchestrator | 2025-10-09 09:59:48 | INFO  | Task c8d88ba7-a426-4934-8978-f697a6dfcd06 (operator) was prepared for execution. 2025-10-09 09:59:48.212217 | orchestrator | 2025-10-09 09:59:48 | INFO  | It takes a moment until task c8d88ba7-a426-4934-8978-f697a6dfcd06 (operator) has been started and output is visible here. 2025-10-09 10:00:04.734084 | orchestrator | 2025-10-09 10:00:04.734203 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-10-09 10:00:04.734220 | orchestrator | 2025-10-09 10:00:04.734234 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-10-09 10:00:04.734246 | orchestrator | Thursday 09 October 2025 09:59:52 +0000 (0:00:00.148) 0:00:00.148 ****** 2025-10-09 10:00:04.734257 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:04.734270 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:04.734281 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:04.734292 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:04.734303 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:04.734314 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:04.734324 | orchestrator | 2025-10-09 10:00:04.734336 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-10-09 10:00:04.734347 | orchestrator | Thursday 09 October 2025 09:59:55 +0000 (0:00:03.347) 0:00:03.495 ****** 2025-10-09 10:00:04.734358 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:04.734369 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:04.734380 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:04.734391 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:04.734402 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:04.734412 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:04.734427 | orchestrator | 2025-10-09 
10:00:04.734439 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-10-09 10:00:04.734450 | orchestrator | 2025-10-09 10:00:04.734461 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-10-09 10:00:04.734472 | orchestrator | Thursday 09 October 2025 09:59:56 +0000 (0:00:00.847) 0:00:04.343 ****** 2025-10-09 10:00:04.734483 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:04.734521 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:04.734533 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:04.734544 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:04.734555 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:04.734607 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:04.734622 | orchestrator | 2025-10-09 10:00:04.734635 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-10-09 10:00:04.734648 | orchestrator | Thursday 09 October 2025 09:59:56 +0000 (0:00:00.182) 0:00:04.526 ****** 2025-10-09 10:00:04.734661 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:00:04.734673 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:00:04.734686 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:00:04.734698 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:00:04.734711 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:00:04.734724 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:00:04.734737 | orchestrator | 2025-10-09 10:00:04.734750 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-10-09 10:00:04.734763 | orchestrator | Thursday 09 October 2025 09:59:57 +0000 (0:00:00.224) 0:00:04.750 ****** 2025-10-09 10:00:04.734776 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:04.734790 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:04.734803 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:04.734816 | 
orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:04.734829 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:04.734842 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:04.734854 | orchestrator | 2025-10-09 10:00:04.734885 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-10-09 10:00:04.734898 | orchestrator | Thursday 09 October 2025 09:59:57 +0000 (0:00:00.637) 0:00:05.387 ****** 2025-10-09 10:00:04.734912 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:04.734923 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:04.734934 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:04.734945 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:04.734956 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:04.734966 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:04.734977 | orchestrator | 2025-10-09 10:00:04.734988 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-10-09 10:00:04.734999 | orchestrator | Thursday 09 October 2025 09:59:58 +0000 (0:00:00.890) 0:00:06.278 ****** 2025-10-09 10:00:04.735011 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-10-09 10:00:04.735022 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-10-09 10:00:04.735033 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-10-09 10:00:04.735050 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-10-09 10:00:04.735061 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-10-09 10:00:04.735072 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-10-09 10:00:04.735083 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-10-09 10:00:04.735094 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-10-09 10:00:04.735105 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-10-09 10:00:04.735116 | orchestrator | changed: 
[testbed-node-3] => (item=sudo) 2025-10-09 10:00:04.735127 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-10-09 10:00:04.735137 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-10-09 10:00:04.735148 | orchestrator | 2025-10-09 10:00:04.735159 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-10-09 10:00:04.735170 | orchestrator | Thursday 09 October 2025 09:59:59 +0000 (0:00:01.221) 0:00:07.500 ****** 2025-10-09 10:00:04.735181 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:04.735191 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:04.735202 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:04.735213 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:04.735224 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:04.735234 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:04.735254 | orchestrator | 2025-10-09 10:00:04.735265 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-10-09 10:00:04.735276 | orchestrator | Thursday 09 October 2025 10:00:01 +0000 (0:00:01.294) 0:00:08.795 ****** 2025-10-09 10:00:04.735287 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-10-09 10:00:04.735298 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-10-09 10:00:04.735309 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-10-09 10:00:04.735320 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 10:00:04.735350 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 10:00:04.735362 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 10:00:04.735373 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 10:00:04.735383 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 10:00:04.735394 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-10-09 10:00:04.735404 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-10-09 10:00:04.735415 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-10-09 10:00:04.735426 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-10-09 10:00:04.735436 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-10-09 10:00:04.735447 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-10-09 10:00:04.735457 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-10-09 10:00:04.735468 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-10-09 10:00:04.735479 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-10-09 10:00:04.735489 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-10-09 10:00:04.735500 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-10-09 10:00:04.735511 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-10-09 10:00:04.735522 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-10-09 10:00:04.735533 | 
orchestrator | 2025-10-09 10:00:04.735543 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-10-09 10:00:04.735555 | orchestrator | Thursday 09 October 2025 10:00:02 +0000 (0:00:01.279) 0:00:10.074 ****** 2025-10-09 10:00:04.735583 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:04.735595 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:04.735606 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:04.735617 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:00:04.735627 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:04.735638 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:04.735649 | orchestrator | 2025-10-09 10:00:04.735660 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-10-09 10:00:04.735671 | orchestrator | Thursday 09 October 2025 10:00:02 +0000 (0:00:00.214) 0:00:10.289 ****** 2025-10-09 10:00:04.735681 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:04.735692 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:04.735703 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:04.735714 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:04.735724 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:04.735735 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:04.735746 | orchestrator | 2025-10-09 10:00:04.735758 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-10-09 10:00:04.735769 | orchestrator | Thursday 09 October 2025 10:00:03 +0000 (0:00:00.546) 0:00:10.836 ****** 2025-10-09 10:00:04.735779 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:04.735790 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:04.735808 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:04.735819 | orchestrator | skipping: [testbed-node-3] 2025-10-09 
10:00:04.735829 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:04.735840 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:04.735851 | orchestrator | 2025-10-09 10:00:04.735862 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-10-09 10:00:04.735873 | orchestrator | Thursday 09 October 2025 10:00:03 +0000 (0:00:00.217) 0:00:11.053 ****** 2025-10-09 10:00:04.735884 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:00:04.735895 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-10-09 10:00:04.735906 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:00:04.735917 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-10-09 10:00:04.735928 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:04.735938 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:00:04.735949 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:04.735960 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:04.735971 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:00:04.735981 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:04.735992 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:00:04.736003 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:04.736014 | orchestrator | 2025-10-09 10:00:04.736025 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-10-09 10:00:04.736036 | orchestrator | Thursday 09 October 2025 10:00:04 +0000 (0:00:00.777) 0:00:11.830 ****** 2025-10-09 10:00:04.736047 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:04.736058 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:04.736068 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:04.736079 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:00:04.736090 | orchestrator | skipping: [testbed-node-4] 2025-10-09 
10:00:04.736101 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:04.736111 | orchestrator | 2025-10-09 10:00:04.736122 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-10-09 10:00:04.736133 | orchestrator | Thursday 09 October 2025 10:00:04 +0000 (0:00:00.162) 0:00:11.993 ****** 2025-10-09 10:00:04.736143 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:04.736154 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:04.736165 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:04.736175 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:00:04.736186 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:04.736197 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:04.736207 | orchestrator | 2025-10-09 10:00:04.736218 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-10-09 10:00:04.736229 | orchestrator | Thursday 09 October 2025 10:00:04 +0000 (0:00:00.164) 0:00:12.158 ****** 2025-10-09 10:00:04.736240 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:04.736251 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:04.736262 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:04.736272 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:00:04.736290 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:06.008351 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:06.008438 | orchestrator | 2025-10-09 10:00:06.008454 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-10-09 10:00:06.008466 | orchestrator | Thursday 09 October 2025 10:00:04 +0000 (0:00:00.179) 0:00:12.337 ****** 2025-10-09 10:00:06.008477 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:00:06.008488 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:00:06.008499 | orchestrator | changed: [testbed-node-2] 2025-10-09 
10:00:06.008509 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:00:06.008520 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:00:06.008532 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:00:06.008543 | orchestrator | 2025-10-09 10:00:06.008555 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-10-09 10:00:06.008646 | orchestrator | Thursday 09 October 2025 10:00:05 +0000 (0:00:00.713) 0:00:13.051 ****** 2025-10-09 10:00:06.008661 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:00:06.008672 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:00:06.008682 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:00:06.008693 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:00:06.008704 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:00:06.008715 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:00:06.008726 | orchestrator | 2025-10-09 10:00:06.008737 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:00:06.008749 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:00:06.008761 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:00:06.008772 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:00:06.008783 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:00:06.008794 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:00:06.008805 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:00:06.008815 | orchestrator | 2025-10-09 10:00:06.008826 | orchestrator | 2025-10-09 10:00:06.008837 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:00:06.008863 | orchestrator | Thursday 09 October 2025 10:00:05 +0000 (0:00:00.265) 0:00:13.316 ****** 2025-10-09 10:00:06.008875 | orchestrator | =============================================================================== 2025-10-09 10:00:06.008887 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s 2025-10-09 10:00:06.008900 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s 2025-10-09 10:00:06.008914 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s 2025-10-09 10:00:06.008928 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s 2025-10-09 10:00:06.008941 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s 2025-10-09 10:00:06.008953 | orchestrator | Do not require tty for all users ---------------------------------------- 0.85s 2025-10-09 10:00:06.008973 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.78s 2025-10-09 10:00:06.008986 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s 2025-10-09 10:00:06.008999 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s 2025-10-09 10:00:06.009019 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.55s 2025-10-09 10:00:06.009038 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s 2025-10-09 10:00:06.009057 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.22s 2025-10-09 10:00:06.009075 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s 2025-10-09 10:00:06.009093 | orchestrator 
| osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.21s 2025-10-09 10:00:06.009112 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-10-09 10:00:06.009131 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2025-10-09 10:00:06.009151 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-10-09 10:00:06.009174 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-10-09 10:00:06.347277 | orchestrator | + osism apply --environment custom facts 2025-10-09 10:00:08.592171 | orchestrator | 2025-10-09 10:00:08 | INFO  | Trying to run play facts in environment custom 2025-10-09 10:00:18.708245 | orchestrator | 2025-10-09 10:00:18 | INFO  | Task a0acc46c-3e92-4f52-9d69-027fcd817569 (facts) was prepared for execution. 2025-10-09 10:00:18.708356 | orchestrator | 2025-10-09 10:00:18 | INFO  | It takes a moment until task a0acc46c-3e92-4f52-9d69-027fcd817569 (facts) has been started and output is visible here. 
2025-10-09 10:01:05.684874 | orchestrator | 2025-10-09 10:01:05.684999 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-10-09 10:01:05.685016 | orchestrator | 2025-10-09 10:01:05.685029 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-10-09 10:01:05.685041 | orchestrator | Thursday 09 October 2025 10:00:22 +0000 (0:00:00.089) 0:00:00.089 ****** 2025-10-09 10:01:05.685052 | orchestrator | ok: [testbed-manager] 2025-10-09 10:01:05.685065 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:01:05.685077 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:01:05.685088 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:01:05.685099 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:01:05.685110 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:01:05.685121 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:01:05.685132 | orchestrator | 2025-10-09 10:01:05.685143 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-10-09 10:01:05.685154 | orchestrator | Thursday 09 October 2025 10:00:24 +0000 (0:00:01.500) 0:00:01.590 ****** 2025-10-09 10:01:05.685164 | orchestrator | ok: [testbed-manager] 2025-10-09 10:01:05.685175 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:01:05.685186 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:01:05.685197 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:01:05.685208 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:01:05.685219 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:01:05.685230 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:01:05.685240 | orchestrator | 2025-10-09 10:01:05.685251 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-10-09 10:01:05.685262 | orchestrator | 2025-10-09 10:01:05.685273 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-10-09 10:01:05.685284 | orchestrator | Thursday 09 October 2025 10:00:25 +0000 (0:00:01.260) 0:00:02.851 ****** 2025-10-09 10:01:05.685295 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:05.685306 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:05.685317 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:05.685328 | orchestrator | 2025-10-09 10:01:05.685339 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-10-09 10:01:05.685351 | orchestrator | Thursday 09 October 2025 10:00:25 +0000 (0:00:00.123) 0:00:02.974 ****** 2025-10-09 10:01:05.685361 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:05.685372 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:05.685383 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:05.685393 | orchestrator | 2025-10-09 10:01:05.685407 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-10-09 10:01:05.685419 | orchestrator | Thursday 09 October 2025 10:00:26 +0000 (0:00:00.327) 0:00:03.302 ****** 2025-10-09 10:01:05.685432 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:05.685445 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:05.685458 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:05.685470 | orchestrator | 2025-10-09 10:01:05.685483 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-10-09 10:01:05.685496 | orchestrator | Thursday 09 October 2025 10:00:26 +0000 (0:00:00.221) 0:00:03.524 ****** 2025-10-09 10:01:05.685510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:01:05.685581 | orchestrator | 2025-10-09 10:01:05.685596 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-10-09 10:01:05.685610 | orchestrator | Thursday 09 October 2025 10:00:26 +0000 (0:00:00.154) 0:00:03.678 ****** 2025-10-09 10:01:05.685624 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:05.685635 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:05.685645 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:05.685656 | orchestrator | 2025-10-09 10:01:05.685667 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-10-09 10:01:05.685678 | orchestrator | Thursday 09 October 2025 10:00:27 +0000 (0:00:00.471) 0:00:04.149 ****** 2025-10-09 10:01:05.685689 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:01:05.685700 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:01:05.685711 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:01:05.685722 | orchestrator | 2025-10-09 10:01:05.685747 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-10-09 10:01:05.685759 | orchestrator | Thursday 09 October 2025 10:00:27 +0000 (0:00:00.178) 0:00:04.328 ****** 2025-10-09 10:01:05.685770 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:01:05.685781 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:01:05.685792 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:01:05.685803 | orchestrator | 2025-10-09 10:01:05.685814 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-10-09 10:01:05.685825 | orchestrator | Thursday 09 October 2025 10:00:28 +0000 (0:00:01.128) 0:00:05.457 ****** 2025-10-09 10:01:05.685836 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:05.685847 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:05.685858 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:05.685869 | orchestrator | 2025-10-09 10:01:05.685880 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-10-09 
10:01:05.685890 | orchestrator | Thursday 09 October 2025 10:00:28 +0000 (0:00:00.496) 0:00:05.953 ****** 2025-10-09 10:01:05.685901 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:01:05.685912 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:01:05.685923 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:01:05.685934 | orchestrator | 2025-10-09 10:01:05.685945 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-10-09 10:01:05.685956 | orchestrator | Thursday 09 October 2025 10:00:29 +0000 (0:00:01.126) 0:00:07.079 ****** 2025-10-09 10:01:05.685967 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:01:05.685978 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:01:05.685989 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:01:05.686000 | orchestrator | 2025-10-09 10:01:05.686011 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-10-09 10:01:05.686074 | orchestrator | Thursday 09 October 2025 10:00:47 +0000 (0:00:18.036) 0:00:25.116 ****** 2025-10-09 10:01:05.686086 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:01:05.686097 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:01:05.686108 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:01:05.686119 | orchestrator | 2025-10-09 10:01:05.686130 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-10-09 10:01:05.686158 | orchestrator | Thursday 09 October 2025 10:00:48 +0000 (0:00:00.112) 0:00:25.229 ****** 2025-10-09 10:01:05.686170 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:01:05.686181 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:01:05.686192 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:01:05.686203 | orchestrator | 2025-10-09 10:01:05.686214 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-10-09 
10:01:05.686224 | orchestrator | Thursday 09 October 2025 10:00:56 +0000 (0:00:08.046) 0:00:33.275 ****** 2025-10-09 10:01:05.686235 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:05.686246 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:05.686257 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:05.686268 | orchestrator | 2025-10-09 10:01:05.686279 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-10-09 10:01:05.686299 | orchestrator | Thursday 09 October 2025 10:00:56 +0000 (0:00:00.536) 0:00:33.811 ****** 2025-10-09 10:01:05.686311 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-10-09 10:01:05.686321 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-10-09 10:01:05.686332 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-10-09 10:01:05.686343 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-10-09 10:01:05.686353 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-10-09 10:01:05.686364 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-10-09 10:01:05.686375 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-10-09 10:01:05.686386 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-10-09 10:01:05.686396 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-10-09 10:01:05.686407 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-10-09 10:01:05.686418 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-10-09 10:01:05.686428 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-10-09 10:01:05.686439 | orchestrator | 2025-10-09 10:01:05.686450 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-10-09 10:01:05.686461 | orchestrator | Thursday 09 October 2025 10:01:00 +0000 (0:00:03.711) 0:00:37.522 ****** 2025-10-09 10:01:05.686471 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:05.686482 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:05.686493 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:05.686503 | orchestrator | 2025-10-09 10:01:05.686514 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 10:01:05.686525 | orchestrator | 2025-10-09 10:01:05.686564 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 10:01:05.686577 | orchestrator | Thursday 09 October 2025 10:01:01 +0000 (0:00:01.413) 0:00:38.936 ****** 2025-10-09 10:01:05.686588 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:01:05.686599 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:01:05.686610 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:01:05.686620 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:05.686631 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:05.686642 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:05.686653 | orchestrator | ok: [testbed-manager] 2025-10-09 10:01:05.686664 | orchestrator | 2025-10-09 10:01:05.686674 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:01:05.686686 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:01:05.686698 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:01:05.686711 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:01:05.686722 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:01:05.686733 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:01:05.686783 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:01:05.686796 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:01:05.686814 | orchestrator | 2025-10-09 10:01:05.686826 | orchestrator | 2025-10-09 10:01:05.686837 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:01:05.686848 | orchestrator | Thursday 09 October 2025 10:01:05 +0000 (0:00:03.846) 0:00:42.782 ****** 2025-10-09 10:01:05.686859 | orchestrator | =============================================================================== 2025-10-09 10:01:05.686870 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.04s 2025-10-09 10:01:05.686881 | orchestrator | Install required packages (Debian) -------------------------------------- 8.05s 2025-10-09 10:01:05.686892 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.85s 2025-10-09 10:01:05.686902 | orchestrator | Copy fact files --------------------------------------------------------- 3.71s 2025-10-09 10:01:05.686913 | orchestrator | Create custom facts directory ------------------------------------------- 1.50s 2025-10-09 10:01:05.686924 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.41s 2025-10-09 10:01:05.686943 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s 2025-10-09 10:01:05.962129 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.13s 2025-10-09 10:01:05.962229 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.13s 2025-10-09 10:01:05.962243 | orchestrator | Create custom facts directory 
------------------------------------------- 0.54s 2025-10-09 10:01:05.962255 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.50s 2025-10-09 10:01:05.962266 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s 2025-10-09 10:01:05.962277 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.33s 2025-10-09 10:01:05.962288 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s 2025-10-09 10:01:05.962299 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.18s 2025-10-09 10:01:05.962310 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2025-10-09 10:01:05.962322 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-10-09 10:01:05.962333 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-10-09 10:01:06.340993 | orchestrator | + osism apply bootstrap 2025-10-09 10:01:18.519926 | orchestrator | 2025-10-09 10:01:18 | INFO  | Task aa4c50cf-1297-49e2-84cb-168078b61a14 (bootstrap) was prepared for execution. 2025-10-09 10:01:18.520043 | orchestrator | 2025-10-09 10:01:18 | INFO  | It takes a moment until task aa4c50cf-1297-49e2-84cb-168078b61a14 (bootstrap) has been started and output is visible here. 
2025-10-09 10:01:36.429767 | orchestrator | 2025-10-09 10:01:36.429903 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-10-09 10:01:36.429921 | orchestrator | 2025-10-09 10:01:36.429941 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-10-09 10:01:36.429954 | orchestrator | Thursday 09 October 2025 10:01:23 +0000 (0:00:00.160) 0:00:00.160 ****** 2025-10-09 10:01:36.429966 | orchestrator | ok: [testbed-manager] 2025-10-09 10:01:36.429978 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:01:36.429990 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:01:36.430001 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:01:36.430062 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:36.430075 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:36.430086 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:36.430097 | orchestrator | 2025-10-09 10:01:36.430108 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 10:01:36.430119 | orchestrator | 2025-10-09 10:01:36.430130 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 10:01:36.430142 | orchestrator | Thursday 09 October 2025 10:01:23 +0000 (0:00:00.282) 0:00:00.443 ****** 2025-10-09 10:01:36.430152 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:01:36.430163 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:01:36.430200 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:01:36.430211 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:01:36.430222 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:01:36.430232 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:01:36.430243 | orchestrator | ok: [testbed-manager] 2025-10-09 10:01:36.430253 | orchestrator | 2025-10-09 10:01:36.430264 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-10-09 10:01:36.430275 | orchestrator |
2025-10-09 10:01:36.430285 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-10-09 10:01:36.430298 | orchestrator | Thursday 09 October 2025 10:01:28 +0000 (0:00:04.916) 0:00:05.360 ******
2025-10-09 10:01:36.430312 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-10-09 10:01:36.430325 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-10-09 10:01:36.430338 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-10-09 10:01:36.430365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-10-09 10:01:36.430378 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-10-09 10:01:36.430390 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-10-09 10:01:36.430403 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-10-09 10:01:36.430415 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:01:36.430427 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-10-09 10:01:36.430440 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-10-09 10:01:36.430452 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-10-09 10:01:36.430465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-10-09 10:01:36.430479 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-10-09 10:01:36.430491 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-10-09 10:01:36.430503 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-10-09 10:01:36.430516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-10-09 10:01:36.430552 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-10-09 10:01:36.430565 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-10-09 10:01:36.430577 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-10-09 10:01:36.430590 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-10-09 10:01:36.430602 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-10-09 10:01:36.430615 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-10-09 10:01:36.430627 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:01:36.430640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-10-09 10:01:36.430652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-10-09 10:01:36.430662 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-10-09 10:01:36.430673 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-10-09 10:01:36.430683 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-10-09 10:01:36.430694 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:01:36.430704 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-10-09 10:01:36.430715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-10-09 10:01:36.430725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-10-09 10:01:36.430736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-10-09 10:01:36.430746 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-10-09 10:01:36.430757 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-10-09 10:01:36.430767 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:01:36.430778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-10-09 10:01:36.430796 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-10-09 10:01:36.430806 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-10-09 10:01:36.430817 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-10-09 10:01:36.430827 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-10-09 10:01:36.430838 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-10-09 10:01:36.430848 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:01:36.430859 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-10-09 10:01:36.430870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-10-09 10:01:36.430880 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-10-09 10:01:36.430891 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:01:36.430920 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-10-09 10:01:36.430931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-09 10:01:36.430942 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-10-09 10:01:36.430952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-09 10:01:36.430963 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-10-09 10:01:36.430973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-09 10:01:36.430984 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:01:36.430995 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-10-09 10:01:36.431006 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:01:36.431016 | orchestrator |
2025-10-09 10:01:36.431027 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-10-09 10:01:36.431038 | orchestrator |
2025-10-09 10:01:36.431049 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-10-09 10:01:36.431059 | orchestrator | Thursday 09 October 2025 10:01:28 +0000 (0:00:00.507) 0:00:05.867 ******
2025-10-09 10:01:36.431070 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:36.431081 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:36.431092 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:36.431102 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:36.431113 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:36.431123 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:36.431134 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:36.431144 | orchestrator |
2025-10-09 10:01:36.431155 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-10-09 10:01:36.431166 | orchestrator | Thursday 09 October 2025 10:01:30 +0000 (0:00:01.282) 0:00:07.150 ******
2025-10-09 10:01:36.431177 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:36.431187 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:36.431198 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:36.431208 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:36.431218 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:36.431229 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:36.431239 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:36.431250 | orchestrator |
2025-10-09 10:01:36.431261 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-10-09 10:01:36.431271 | orchestrator | Thursday 09 October 2025 10:01:31 +0000 (0:00:00.293) 0:00:08.413 ******
2025-10-09 10:01:36.431283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:01:36.431296 | orchestrator |
2025-10-09 10:01:36.431307 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-10-09 10:01:36.431318 | orchestrator | Thursday 09 October 2025 10:01:31 +0000 (0:00:00.293) 0:00:08.706 ******
2025-10-09 10:01:36.431329 | orchestrator | changed: [testbed-manager]
2025-10-09 10:01:36.431339 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:01:36.431350 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:01:36.431367 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:01:36.431377 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:01:36.431388 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:01:36.431398 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:01:36.431409 | orchestrator |
2025-10-09 10:01:36.431420 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-10-09 10:01:36.431430 | orchestrator | Thursday 09 October 2025 10:01:33 +0000 (0:00:02.120) 0:00:10.827 ******
2025-10-09 10:01:36.431441 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:01:36.431453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:01:36.431466 | orchestrator |
2025-10-09 10:01:36.431477 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-10-09 10:01:36.431487 | orchestrator | Thursday 09 October 2025 10:01:34 +0000 (0:00:00.292) 0:00:11.120 ******
2025-10-09 10:01:36.431498 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:01:36.431508 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:01:36.431519 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:01:36.431545 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:01:36.431556 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:01:36.431566 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:01:36.431577 | orchestrator |
2025-10-09 10:01:36.431588 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-10-09 10:01:36.431599 | orchestrator | Thursday 09 October 2025 10:01:35 +0000 (0:00:01.066) 0:00:12.186 ******
2025-10-09 10:01:36.431609 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:01:36.431619 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:01:36.431630 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:01:36.431640 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:01:36.431651 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:01:36.431662 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:01:36.431672 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:01:36.431683 | orchestrator |
2025-10-09 10:01:36.431694 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-10-09 10:01:36.431704 | orchestrator | Thursday 09 October 2025 10:01:35 +0000 (0:00:00.722) 0:00:12.908 ******
2025-10-09 10:01:36.431715 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:01:36.431726 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:01:36.431736 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:01:36.431747 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:01:36.431757 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:01:36.431768 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:01:36.431778 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:36.431789 | orchestrator |
2025-10-09 10:01:36.431800 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-10-09 10:01:36.431811 | orchestrator | Thursday 09 October 2025 10:01:36 +0000 (0:00:00.459) 0:00:13.368 ******
2025-10-09 10:01:36.431822 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:01:36.431840 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:01:36.431857 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:01:49.591352 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:01:49.591461 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:01:49.591478 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:01:49.591490 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:01:49.591501 | orchestrator |
2025-10-09 10:01:49.591514 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-10-09 10:01:49.591579 | orchestrator | Thursday 09 October 2025 10:01:36 +0000 (0:00:00.258) 0:00:13.626 ******
2025-10-09 10:01:49.591593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:01:49.591645 | orchestrator |
2025-10-09 10:01:49.591658 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-10-09 10:01:49.591670 | orchestrator | Thursday 09 October 2025 10:01:36 +0000 (0:00:00.339) 0:00:13.965 ******
2025-10-09 10:01:49.591681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:01:49.591693 | orchestrator |
2025-10-09 10:01:49.591704 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-10-09 10:01:49.591715 | orchestrator | Thursday 09 October 2025 10:01:37 +0000 (0:00:00.376) 0:00:14.342 ******
2025-10-09 10:01:49.591726 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:49.591738 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:49.591749 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.591759 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:49.591770 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.591780 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:49.591812 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.591824 | orchestrator |
2025-10-09 10:01:49.591834 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-10-09 10:01:49.591845 | orchestrator | Thursday 09 October 2025 10:01:38 +0000 (0:00:01.475) 0:00:15.818 ******
2025-10-09 10:01:49.591856 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:01:49.591867 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:01:49.591877 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:01:49.591888 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:01:49.591898 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:01:49.591909 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:01:49.591919 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:01:49.591930 | orchestrator |
2025-10-09 10:01:49.591941 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-10-09 10:01:49.591951 | orchestrator | Thursday 09 October 2025 10:01:38 +0000 (0:00:00.250) 0:00:16.068 ******
2025-10-09 10:01:49.591962 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.591973 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:49.591983 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:49.591994 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:49.592004 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:49.592015 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.592025 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.592036 | orchestrator |
2025-10-09 10:01:49.592046 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-10-09 10:01:49.592057 | orchestrator | Thursday 09 October 2025 10:01:39 +0000 (0:00:00.680) 0:00:16.749 ******
2025-10-09 10:01:49.592068 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:01:49.592079 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:01:49.592090 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:01:49.592100 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:01:49.592111 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:01:49.592121 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:01:49.592132 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:01:49.592143 | orchestrator |
2025-10-09 10:01:49.592154 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-10-09 10:01:49.592165 | orchestrator | Thursday 09 October 2025 10:01:39 +0000 (0:00:00.255) 0:00:17.005 ******
2025-10-09 10:01:49.592176 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:01:49.592187 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.592198 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:01:49.592208 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:01:49.592219 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:01:49.592229 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:01:49.592240 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:01:49.592258 | orchestrator |
2025-10-09 10:01:49.592269 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-10-09 10:01:49.592280 | orchestrator | Thursday 09 October 2025 10:01:40 +0000 (0:00:00.552) 0:00:17.558 ******
2025-10-09 10:01:49.592290 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.592301 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:01:49.592312 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:01:49.592322 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:01:49.592333 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:01:49.592344 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:01:49.592354 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:01:49.592365 | orchestrator |
2025-10-09 10:01:49.592376 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-10-09 10:01:49.592386 | orchestrator | Thursday 09 October 2025 10:01:41 +0000 (0:00:01.222) 0:00:18.780 ******
2025-10-09 10:01:49.592397 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.592408 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:49.592419 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.592429 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:49.592440 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:49.592450 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:49.592461 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.592472 | orchestrator |
2025-10-09 10:01:49.592482 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-10-09 10:01:49.592493 | orchestrator | Thursday 09 October 2025 10:01:42 +0000 (0:00:01.281) 0:00:20.062 ******
2025-10-09 10:01:49.592549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:01:49.592562 | orchestrator |
2025-10-09 10:01:49.592573 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-10-09 10:01:49.592584 | orchestrator | Thursday 09 October 2025 10:01:43 +0000 (0:00:00.356) 0:00:20.419 ******
2025-10-09 10:01:49.592595 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:01:49.592606 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:01:49.592617 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:01:49.592628 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:01:49.592639 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:01:49.592650 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:01:49.592660 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:01:49.592671 | orchestrator |
2025-10-09 10:01:49.592682 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-10-09 10:01:49.592693 | orchestrator | Thursday 09 October 2025 10:01:44 +0000 (0:00:01.459) 0:00:21.879 ******
2025-10-09 10:01:49.592704 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.592715 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:49.592725 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:49.592736 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:49.592747 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.592758 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:49.592768 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.592779 | orchestrator |
2025-10-09 10:01:49.592790 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-10-09 10:01:49.592801 | orchestrator | Thursday 09 October 2025 10:01:45 +0000 (0:00:00.259) 0:00:22.139 ******
2025-10-09 10:01:49.592812 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.592822 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:49.592833 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:49.592843 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:49.592854 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.592864 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:49.592880 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.592891 | orchestrator |
2025-10-09 10:01:49.592902 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-10-09 10:01:49.592920 | orchestrator | Thursday 09 October 2025 10:01:45 +0000 (0:00:00.252) 0:00:22.391 ******
2025-10-09 10:01:49.592931 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.592942 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:49.592952 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:49.592963 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:49.592973 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.592984 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:49.592995 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.593005 | orchestrator |
2025-10-09 10:01:49.593016 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-10-09 10:01:49.593027 | orchestrator | Thursday 09 October 2025 10:01:45 +0000 (0:00:00.293) 0:00:22.685 ******
2025-10-09 10:01:49.593039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:01:49.593052 | orchestrator |
2025-10-09 10:01:49.593063 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-10-09 10:01:49.593074 | orchestrator | Thursday 09 October 2025 10:01:45 +0000 (0:00:00.311) 0:00:22.997 ******
2025-10-09 10:01:49.593084 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.593095 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:49.593106 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:49.593117 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:49.593128 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.593138 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.593149 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:49.593160 | orchestrator |
2025-10-09 10:01:49.593171 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-10-09 10:01:49.593182 | orchestrator | Thursday 09 October 2025 10:01:46 +0000 (0:00:00.568) 0:00:23.565 ******
2025-10-09 10:01:49.593193 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:01:49.593204 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:01:49.593215 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:01:49.593226 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:01:49.593236 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:01:49.593247 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:01:49.593258 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:01:49.593268 | orchestrator |
2025-10-09 10:01:49.593279 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-10-09 10:01:49.593290 | orchestrator | Thursday 09 October 2025 10:01:46 +0000 (0:00:00.265) 0:00:23.831 ******
2025-10-09 10:01:49.593301 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.593311 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:01:49.593322 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:01:49.593333 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:01:49.593344 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:49.593354 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.593365 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.593376 | orchestrator |
2025-10-09 10:01:49.593387 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-10-09 10:01:49.593398 | orchestrator | Thursday 09 October 2025 10:01:47 +0000 (0:00:01.122) 0:00:24.953 ******
2025-10-09 10:01:49.593408 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.593419 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:01:49.593430 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:01:49.593441 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:01:49.593451 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.593462 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.593473 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:01:49.593483 | orchestrator |
2025-10-09 10:01:49.593494 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-10-09 10:01:49.593505 | orchestrator | Thursday 09 October 2025 10:01:48 +0000 (0:00:00.592) 0:00:25.545 ******
2025-10-09 10:01:49.593539 | orchestrator | ok: [testbed-manager]
2025-10-09 10:01:49.593551 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:01:49.593562 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:01:49.593573 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:01:49.593591 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:02:33.085191 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.085316 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:02:33.085334 | orchestrator |
2025-10-09 10:02:33.085348 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-10-09 10:02:33.085360 | orchestrator | Thursday 09 October 2025 10:01:49 +0000 (0:00:01.148) 0:00:26.693 ******
2025-10-09 10:02:33.085372 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:02:33.085383 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:02:33.085394 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.085404 | orchestrator | changed: [testbed-manager]
2025-10-09 10:02:33.085415 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:02:33.085426 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:02:33.085437 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:02:33.085447 | orchestrator |
2025-10-09 10:02:33.085458 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-10-09 10:02:33.085469 | orchestrator | Thursday 09 October 2025 10:02:07 +0000 (0:00:17.877) 0:00:44.570 ******
2025-10-09 10:02:33.085480 | orchestrator | ok: [testbed-manager]
2025-10-09 10:02:33.085491 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:02:33.085552 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:02:33.085564 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:02:33.085575 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:02:33.085585 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.085596 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:02:33.085607 | orchestrator |
2025-10-09 10:02:33.085617 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-10-09 10:02:33.085629 | orchestrator | Thursday 09 October 2025 10:02:07 +0000 (0:00:00.242) 0:00:44.813 ******
2025-10-09 10:02:33.085640 | orchestrator | ok: [testbed-manager]
2025-10-09 10:02:33.085650 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:02:33.085661 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:02:33.085672 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:02:33.085682 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:02:33.085693 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.085703 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:02:33.085714 | orchestrator |
2025-10-09 10:02:33.085728 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-10-09 10:02:33.085742 | orchestrator | Thursday 09 October 2025 10:02:07 +0000 (0:00:00.250) 0:00:45.064 ******
2025-10-09 10:02:33.085755 | orchestrator | ok: [testbed-manager]
2025-10-09 10:02:33.085768 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:02:33.085781 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:02:33.085794 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:02:33.085806 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:02:33.085818 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.085831 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:02:33.085844 | orchestrator |
2025-10-09 10:02:33.085857 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-10-09 10:02:33.085869 | orchestrator | Thursday 09 October 2025 10:02:08 +0000 (0:00:00.265) 0:00:45.329 ******
2025-10-09 10:02:33.085883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:02:33.085899 | orchestrator |
2025-10-09 10:02:33.085912 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-10-09 10:02:33.085926 | orchestrator | Thursday 09 October 2025 10:02:08 +0000 (0:00:00.303) 0:00:45.633 ******
2025-10-09 10:02:33.085938 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:02:33.085977 | orchestrator | ok: [testbed-manager]
2025-10-09 10:02:33.085991 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:02:33.086003 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:02:33.086074 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:02:33.086102 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.086114 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:02:33.086125 | orchestrator |
2025-10-09 10:02:33.086136 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-10-09 10:02:33.086148 | orchestrator | Thursday 09 October 2025 10:02:10 +0000 (0:00:01.764) 0:00:47.398 ******
2025-10-09 10:02:33.086159 | orchestrator | changed: [testbed-manager]
2025-10-09 10:02:33.086169 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:02:33.086180 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:02:33.086191 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:02:33.086202 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:02:33.086213 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:02:33.086224 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:02:33.086234 | orchestrator |
2025-10-09 10:02:33.086245 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-10-09 10:02:33.086256 | orchestrator | Thursday 09 October 2025 10:02:11 +0000 (0:00:01.069) 0:00:48.468 ******
2025-10-09 10:02:33.086267 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:02:33.086278 | orchestrator | ok: [testbed-manager]
2025-10-09 10:02:33.086289 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:02:33.086299 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:02:33.086310 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:02:33.086321 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.086332 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:02:33.086342 | orchestrator |
2025-10-09 10:02:33.086353 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-10-09 10:02:33.086364 | orchestrator | Thursday 09 October 2025 10:02:12 +0000 (0:00:00.803) 0:00:49.271 ******
2025-10-09 10:02:33.086393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:02:33.086406 | orchestrator |
2025-10-09 10:02:33.086418 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-10-09 10:02:33.086430 | orchestrator | Thursday 09 October 2025 10:02:12 +0000 (0:00:00.345) 0:00:49.616 ******
2025-10-09 10:02:33.086441 | orchestrator | changed: [testbed-manager]
2025-10-09 10:02:33.086452 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:02:33.086463 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:02:33.086474 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:02:33.086485 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:02:33.086512 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:02:33.086524 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:02:33.086535 | orchestrator |
2025-10-09 10:02:33.086565 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-10-09 10:02:33.086577 | orchestrator | Thursday 09 October 2025 10:02:13 +0000 (0:00:01.082) 0:00:50.699 ******
2025-10-09 10:02:33.086588 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:02:33.086598 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:02:33.086609 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:02:33.086620 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:02:33.086631 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:02:33.086642 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:02:33.086652 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:02:33.086663 | orchestrator |
2025-10-09 10:02:33.086674 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-10-09 10:02:33.086685 | orchestrator | Thursday 09 October 2025 10:02:13 +0000 (0:00:00.307) 0:00:51.006 ******
2025-10-09 10:02:33.086696 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:02:33.086707 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:02:33.086718 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:02:33.086738 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:02:33.086749 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:02:33.086760 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:02:33.086771 | orchestrator | changed: [testbed-manager]
2025-10-09 10:02:33.086782 | orchestrator |
2025-10-09 10:02:33.086793 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-10-09 10:02:33.086803 | orchestrator | Thursday 09 October 2025 10:02:27 +0000 (0:00:13.465) 0:01:04.472 ******
2025-10-09 10:02:33.086814 | orchestrator | ok: [testbed-manager]
2025-10-09 10:02:33.086825 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:02:33.086836 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:02:33.086847 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:02:33.086858 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.086868 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:02:33.086879 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:02:33.086890 | orchestrator |
2025-10-09 10:02:33.086901 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-10-09 10:02:33.086918 | orchestrator | Thursday 09 October 2025 10:02:28 +0000 (0:00:01.332) 0:01:05.804 ******
2025-10-09 10:02:33.086929 | orchestrator | ok: [testbed-manager]
2025-10-09 10:02:33.086940 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:02:33.086951 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:02:33.086962 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:02:33.086973 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:02:33.086983 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.086994 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:02:33.087005 | orchestrator |
2025-10-09 10:02:33.087016 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-10-09 10:02:33.087027 | orchestrator | Thursday 09 October 2025 10:02:29 +0000 (0:00:00.914) 0:01:06.719 ******
2025-10-09 10:02:33.087038 | orchestrator | ok: [testbed-manager]
2025-10-09 10:02:33.087049 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:02:33.087060 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:02:33.087071 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:02:33.087082 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:02:33.087093 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:02:33.087103 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:02:33.087114 | orchestrator |
2025-10-09 10:02:33.087126 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-10-09 10:02:33.087137 | orchestrator | Thursday 09 October 2025 10:02:29 +0000 (0:00:00.245) 0:01:06.965 ******
2025-10-09 10:02:33.087148 | 
orchestrator | ok: [testbed-manager] 2025-10-09 10:02:33.087159 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:02:33.087170 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:02:33.087180 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:02:33.087191 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:02:33.087202 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:02:33.087213 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:02:33.087224 | orchestrator | 2025-10-09 10:02:33.087235 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-10-09 10:02:33.087246 | orchestrator | Thursday 09 October 2025 10:02:30 +0000 (0:00:00.240) 0:01:07.205 ****** 2025-10-09 10:02:33.087258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:02:33.087269 | orchestrator | 2025-10-09 10:02:33.087280 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-10-09 10:02:33.087291 | orchestrator | Thursday 09 October 2025 10:02:30 +0000 (0:00:00.302) 0:01:07.507 ****** 2025-10-09 10:02:33.087302 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:02:33.087313 | orchestrator | ok: [testbed-manager] 2025-10-09 10:02:33.087324 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:02:33.087335 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:02:33.087346 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:02:33.087364 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:02:33.087375 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:02:33.087386 | orchestrator | 2025-10-09 10:02:33.087397 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-10-09 10:02:33.087408 | orchestrator | Thursday 09 October 2025 10:02:32 +0000 
(0:00:01.833) 0:01:09.340 ****** 2025-10-09 10:02:33.087419 | orchestrator | changed: [testbed-manager] 2025-10-09 10:02:33.087430 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:02:33.087441 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:02:33.087453 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:02:33.087464 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:02:33.087475 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:02:33.087485 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:02:33.087510 | orchestrator | 2025-10-09 10:02:33.087522 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-10-09 10:02:33.087533 | orchestrator | Thursday 09 October 2025 10:02:32 +0000 (0:00:00.586) 0:01:09.927 ****** 2025-10-09 10:02:33.087544 | orchestrator | ok: [testbed-manager] 2025-10-09 10:02:33.087555 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:02:33.087566 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:02:33.087577 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:02:33.087588 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:02:33.087598 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:02:33.087609 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:02:33.087620 | orchestrator | 2025-10-09 10:02:33.087639 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-10-09 10:04:54.615433 | orchestrator | Thursday 09 October 2025 10:02:33 +0000 (0:00:00.262) 0:01:10.189 ****** 2025-10-09 10:04:54.615561 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:54.615576 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:54.615587 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:54.615597 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:54.615607 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:54.615616 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:54.615626 | orchestrator | ok: 
[testbed-node-5] 2025-10-09 10:04:54.615636 | orchestrator | 2025-10-09 10:04:54.615647 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-10-09 10:04:54.615657 | orchestrator | Thursday 09 October 2025 10:02:34 +0000 (0:00:01.275) 0:01:11.465 ****** 2025-10-09 10:04:54.615667 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:04:54.615677 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:04:54.615687 | orchestrator | changed: [testbed-manager] 2025-10-09 10:04:54.615697 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:04:54.615707 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:04:54.615716 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:04:54.615726 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:04:54.615736 | orchestrator | 2025-10-09 10:04:54.615747 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-10-09 10:04:54.615756 | orchestrator | Thursday 09 October 2025 10:02:36 +0000 (0:00:01.934) 0:01:13.399 ****** 2025-10-09 10:04:54.615766 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:54.615776 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:54.615786 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:54.615795 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:54.615805 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:54.615814 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:54.615824 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:54.615833 | orchestrator | 2025-10-09 10:04:54.615843 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-10-09 10:04:54.615853 | orchestrator | Thursday 09 October 2025 10:02:38 +0000 (0:00:02.644) 0:01:16.043 ****** 2025-10-09 10:04:54.615862 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:54.615886 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:54.615896 | orchestrator | 
ok: [testbed-node-4] 2025-10-09 10:04:54.615906 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:54.615945 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:54.615957 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:54.615968 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:54.615979 | orchestrator | 2025-10-09 10:04:54.615990 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-10-09 10:04:54.616002 | orchestrator | Thursday 09 October 2025 10:03:17 +0000 (0:00:38.404) 0:01:54.448 ****** 2025-10-09 10:04:54.616013 | orchestrator | changed: [testbed-manager] 2025-10-09 10:04:54.616024 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:04:54.616035 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:04:54.616046 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:04:54.616057 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:04:54.616068 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:04:54.616079 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:04:54.616090 | orchestrator | 2025-10-09 10:04:54.616101 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-10-09 10:04:54.616112 | orchestrator | Thursday 09 October 2025 10:04:37 +0000 (0:01:19.926) 0:03:14.375 ****** 2025-10-09 10:04:54.616123 | orchestrator | ok: [testbed-manager] 2025-10-09 10:04:54.616134 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:54.616145 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:54.616156 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:54.616167 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:54.616178 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:54.616188 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:54.616199 | orchestrator | 2025-10-09 10:04:54.616211 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-10-09 10:04:54.616223 
| orchestrator | Thursday 09 October 2025 10:04:38 +0000 (0:00:01.734) 0:03:16.109 ****** 2025-10-09 10:04:54.616233 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:04:54.616244 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:04:54.616255 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:04:54.616266 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:04:54.616277 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:04:54.616288 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:04:54.616299 | orchestrator | changed: [testbed-manager] 2025-10-09 10:04:54.616309 | orchestrator | 2025-10-09 10:04:54.616319 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-10-09 10:04:54.616329 | orchestrator | Thursday 09 October 2025 10:04:52 +0000 (0:00:13.267) 0:03:29.377 ****** 2025-10-09 10:04:54.616346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-10-09 10:04:54.616362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-10-09 10:04:54.616396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-10-09 10:04:54.616413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-10-09 10:04:54.616433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-10-09 10:04:54.616465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-10-09 10:04:54.616475 | orchestrator | 2025-10-09 10:04:54.616485 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-10-09 10:04:54.616495 | orchestrator | Thursday 09 October 2025 10:04:52 +0000 (0:00:00.458) 0:03:29.835 ****** 2025-10-09 10:04:54.616505 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-10-09 10:04:54.616515 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:04:54.616525 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-10-09 10:04:54.616534 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-10-09 10:04:54.616544 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:04:54.616554 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:04:54.616563 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-10-09 10:04:54.616573 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:04:54.616582 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-10-09 10:04:54.616592 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-10-09 10:04:54.616602 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-10-09 10:04:54.616611 | orchestrator | 2025-10-09 10:04:54.616621 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-10-09 10:04:54.616630 | orchestrator | Thursday 09 October 2025 10:04:54 +0000 (0:00:01.660) 0:03:31.496 ****** 2025-10-09 10:04:54.616640 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-10-09 10:04:54.616650 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-10-09 10:04:54.616660 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-10-09 10:04:54.616669 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-10-09 10:04:54.616679 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-10-09 10:04:54.616688 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-10-09 10:04:54.616698 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-10-09 10:04:54.616707 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-10-09 10:04:54.616717 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-10-09 10:04:54.616726 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-10-09 10:04:54.616736 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:04:54.616745 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-10-09 10:04:54.616762 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-10-09 10:04:54.616771 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-10-09 10:04:54.616781 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-10-09 10:04:54.616790 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-10-09 10:04:54.616800 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-10-09 10:04:54.616809 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-10-09 10:04:54.616825 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-10-09 10:05:00.842581 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-10-09 10:05:00.842690 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-10-09 10:05:00.842707 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-10-09 10:05:00.842734 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-10-09 10:05:00.842744 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-10-09 10:05:00.842752 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-10-09 10:05:00.842761 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-10-09 10:05:00.842769 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:00.842780 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-10-09 10:05:00.842793 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-10-09 10:05:00.842802 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-10-09 10:05:00.842810 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-10-09 10:05:00.842819 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-10-09 10:05:00.842833 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:00.842841 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-10-09 10:05:00.842849 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-10-09 10:05:00.842857 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-10-09 10:05:00.842865 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-10-09 10:05:00.842872 | 
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-10-09 10:05:00.842880 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-10-09 10:05:00.842888 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-10-09 10:05:00.842896 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-10-09 10:05:00.842903 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-10-09 10:05:00.842911 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-10-09 10:05:00.842918 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:00.842925 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-10-09 10:05:00.842955 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-10-09 10:05:00.842964 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-10-09 10:05:00.842971 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-10-09 10:05:00.842979 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-10-09 10:05:00.842986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-10-09 10:05:00.842994 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-10-09 10:05:00.843001 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-10-09 10:05:00.843010 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-10-09 10:05:00.843017 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-10-09 10:05:00.843025 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-10-09 10:05:00.843032 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-10-09 10:05:00.843037 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-10-09 10:05:00.843041 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-10-09 10:05:00.843046 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-10-09 10:05:00.843050 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-10-09 10:05:00.843055 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-10-09 10:05:00.843060 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-10-09 10:05:00.843080 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-10-09 10:05:00.843086 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-10-09 10:05:00.843091 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-10-09 10:05:00.843097 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-10-09 10:05:00.843102 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-10-09 10:05:00.843107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-10-09 10:05:00.843113 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-10-09 10:05:00.843118 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-10-09 10:05:00.843123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-10-09 10:05:00.843129 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-10-09 10:05:00.843134 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-10-09 10:05:00.843139 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-10-09 10:05:00.843145 | orchestrator | 2025-10-09 10:05:00.843152 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-10-09 10:05:00.843161 | orchestrator | Thursday 09 October 2025 10:04:58 +0000 (0:00:03.648) 0:03:35.145 ****** 2025-10-09 10:05:00.843167 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:05:00.843172 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:05:00.843183 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:05:00.843189 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:05:00.843194 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:05:00.843200 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:05:00.843205 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-10-09 10:05:00.843211 | orchestrator | 2025-10-09 10:05:00.843216 | orchestrator | TASK [osism.commons.sysctl : Set 
sysctl parameters on compute] ***************** 2025-10-09 10:05:00.843222 | orchestrator | Thursday 09 October 2025 10:04:59 +0000 (0:00:01.688) 0:03:36.833 ****** 2025-10-09 10:05:00.843227 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:05:00.843233 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:00.843237 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:05:00.843242 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:05:00.843246 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:00.843251 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:00.843256 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:05:00.843261 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:00.843266 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:05:00.843270 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:05:00.843275 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:05:00.843279 | orchestrator | 2025-10-09 10:05:00.843284 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2025-10-09 10:05:00.843289 | orchestrator | Thursday 09 October 2025 10:05:00 +0000 (0:00:00.620) 0:03:37.453 ****** 2025-10-09 10:05:00.843293 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:05:00.843298 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:00.843302 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:05:00.843307 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:05:00.843311 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:00.843316 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:00.843320 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-10-09 10:05:00.843325 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:00.843330 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:05:00.843334 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:05:00.843339 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-10-09 10:05:00.843343 | orchestrator | 2025-10-09 10:05:00.843351 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-10-09 10:05:15.033531 | orchestrator | Thursday 09 October 2025 10:05:00 +0000 (0:00:00.490) 0:03:37.944 ****** 2025-10-09 10:05:15.033644 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-10-09 10:05:15.033661 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:15.033674 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-10-09 10:05:15.033710 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:15.033722 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-10-09 10:05:15.033733 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024})  2025-10-09 10:05:15.033744 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:15.033755 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:15.033765 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-10-09 10:05:15.033776 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-10-09 10:05:15.033787 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-10-09 10:05:15.033798 | orchestrator | 2025-10-09 10:05:15.033810 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-10-09 10:05:15.033822 | orchestrator | Thursday 09 October 2025 10:05:02 +0000 (0:00:01.699) 0:03:39.643 ****** 2025-10-09 10:05:15.033832 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:15.033843 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:15.033870 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:15.033881 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:15.033892 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:15.033903 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:15.033913 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:15.033924 | orchestrator | 2025-10-09 10:05:15.033935 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-10-09 10:05:15.033945 | orchestrator | Thursday 09 October 2025 10:05:02 +0000 (0:00:00.334) 0:03:39.978 ****** 2025-10-09 10:05:15.033956 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:15.033968 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:15.033979 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:15.033989 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:15.034000 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:15.034011 | orchestrator 
| ok: [testbed-node-4] 2025-10-09 10:05:15.034088 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:15.034101 | orchestrator | 2025-10-09 10:05:15.034114 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-10-09 10:05:15.034127 | orchestrator | Thursday 09 October 2025 10:05:08 +0000 (0:00:05.918) 0:03:45.897 ****** 2025-10-09 10:05:15.034140 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-10-09 10:05:15.034153 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:15.034166 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-10-09 10:05:15.034179 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-10-09 10:05:15.034191 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:15.034205 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:15.034218 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-10-09 10:05:15.034231 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-10-09 10:05:15.034243 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:15.034256 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:15.034269 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-10-09 10:05:15.034281 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:15.034294 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-10-09 10:05:15.034306 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:15.034319 | orchestrator | 2025-10-09 10:05:15.034331 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-10-09 10:05:15.034344 | orchestrator | Thursday 09 October 2025 10:05:09 +0000 (0:00:00.335) 0:03:46.232 ****** 2025-10-09 10:05:15.034358 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-10-09 10:05:15.034371 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-10-09 10:05:15.034382 | orchestrator | ok: [testbed-node-1] => 
(item=cron) 2025-10-09 10:05:15.034401 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-10-09 10:05:15.034412 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-10-09 10:05:15.034422 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-10-09 10:05:15.034451 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-10-09 10:05:15.034462 | orchestrator | 2025-10-09 10:05:15.034473 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-10-09 10:05:15.034484 | orchestrator | Thursday 09 October 2025 10:05:10 +0000 (0:00:01.037) 0:03:47.269 ****** 2025-10-09 10:05:15.034498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:05:15.034511 | orchestrator | 2025-10-09 10:05:15.034522 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-10-09 10:05:15.034533 | orchestrator | Thursday 09 October 2025 10:05:10 +0000 (0:00:00.549) 0:03:47.819 ****** 2025-10-09 10:05:15.034544 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:15.034555 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:15.034565 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:15.034576 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:15.034587 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:15.034597 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:15.034608 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:15.034619 | orchestrator | 2025-10-09 10:05:15.034630 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-10-09 10:05:15.034640 | orchestrator | Thursday 09 October 2025 10:05:12 +0000 (0:00:01.447) 0:03:49.267 ****** 2025-10-09 10:05:15.034651 | orchestrator | ok: [testbed-manager] 
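
The motd-news check-then-disable sequence recorded above (stat the file, then change it only where it exists) can be sketched as follows; the exact mechanism the osism.commons.motd role uses is an assumption, but on Ubuntu the dynamic motd-news is conventionally disabled via `ENABLED=0` in `/etc/default/motd-news`:

```yaml
- name: Check if /etc/default/motd-news exists
  ansible.builtin.stat:
    path: /etc/default/motd-news
  register: motd_news

- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: ENABLED=0
  when: motd_news.stat.exists
```
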
2025-10-09 10:05:15.034681 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:15.034692 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:15.034703 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:15.034714 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:15.034724 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:15.034735 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:15.034745 | orchestrator | 2025-10-09 10:05:15.034756 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-10-09 10:05:15.034767 | orchestrator | Thursday 09 October 2025 10:05:12 +0000 (0:00:00.612) 0:03:49.880 ****** 2025-10-09 10:05:15.034778 | orchestrator | changed: [testbed-manager] 2025-10-09 10:05:15.034788 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:05:15.034799 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:05:15.034810 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:05:15.034820 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:05:15.034831 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:05:15.034841 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:05:15.034852 | orchestrator | 2025-10-09 10:05:15.034863 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-10-09 10:05:15.034874 | orchestrator | Thursday 09 October 2025 10:05:13 +0000 (0:00:00.595) 0:03:50.476 ****** 2025-10-09 10:05:15.034884 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:15.034895 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:15.034905 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:15.034916 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:15.034927 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:15.034937 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:15.034948 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:15.034958 | orchestrator | 2025-10-09 10:05:15.034969 | orchestrator | 
TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-10-09 10:05:15.034980 | orchestrator | Thursday 09 October 2025 10:05:14 +0000 (0:00:00.652) 0:03:51.128 ****** 2025-10-09 10:05:15.034996 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002764.9855871, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:15.035017 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002794.1042655, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:15.035029 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002794.3453624, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 
10:05:15.035048 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002801.7862096, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:15.035060 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002808.5951574, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:15.035079 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002798.3842175, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:31.989702 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1760002802.3639119, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:31.989840 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:31.989883 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:31.989896 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:31.989907 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:31.989918 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:31.989929 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:31.989968 | orchestrator | changed: [testbed-node-5] 
=> (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:05:31.989981 | orchestrator | 2025-10-09 10:05:31.989995 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-10-09 10:05:31.990007 | orchestrator | Thursday 09 October 2025 10:05:15 +0000 (0:00:01.005) 0:03:52.133 ****** 2025-10-09 10:05:31.990093 | orchestrator | changed: [testbed-manager] 2025-10-09 10:05:31.990107 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:05:31.990118 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:05:31.990129 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:05:31.990139 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:05:31.990150 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:05:31.990161 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:05:31.990172 | orchestrator | 2025-10-09 10:05:31.990188 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-10-09 10:05:31.990199 | orchestrator | Thursday 09 October 2025 10:05:16 +0000 (0:00:01.227) 0:03:53.361 ****** 2025-10-09 10:05:31.990210 | orchestrator | changed: [testbed-manager] 2025-10-09 10:05:31.990222 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:05:31.990234 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:05:31.990247 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:05:31.990259 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:05:31.990271 | orchestrator | changed: 
[testbed-node-4] 2025-10-09 10:05:31.990283 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:05:31.990294 | orchestrator | 2025-10-09 10:05:31.990307 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-10-09 10:05:31.990319 | orchestrator | Thursday 09 October 2025 10:05:18 +0000 (0:00:02.173) 0:03:55.534 ****** 2025-10-09 10:05:31.990332 | orchestrator | changed: [testbed-manager] 2025-10-09 10:05:31.990344 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:05:31.990356 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:05:31.990369 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:05:31.990381 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:05:31.990394 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:05:31.990406 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:05:31.990419 | orchestrator | 2025-10-09 10:05:31.990458 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-10-09 10:05:31.990472 | orchestrator | Thursday 09 October 2025 10:05:19 +0000 (0:00:01.182) 0:03:56.717 ****** 2025-10-09 10:05:31.990484 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:05:31.990497 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:05:31.990510 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:05:31.990522 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:05:31.990535 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:05:31.990547 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:05:31.990560 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:05:31.990572 | orchestrator | 2025-10-09 10:05:31.990584 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-10-09 10:05:31.990595 | orchestrator | Thursday 09 October 2025 10:05:19 +0000 (0:00:00.320) 0:03:57.037 ****** 2025-10-09 10:05:31.990606 | orchestrator | ok: [testbed-manager] 
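
The paired "print the motd" (skipped) and "not print the motd" (ok) tasks above suggest a toggle on a role variable; a hedged sketch of the disabling branch, assuming the usual `PrintMotd` directive in `sshd_config` (handler name is illustrative):

```yaml
- name: Configure SSH to not print the motd
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PrintMotd'
    line: PrintMotd no
  notify: Restart sshd
```

Since the pam_motd.so rules were removed from `/etc/pam.d/sshd` and `/etc/pam.d/login` earlier in the run, turning off `PrintMotd` here keeps SSH logins showing only the static `/etc/motd` copied by the role.
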
2025-10-09 10:05:31.990617 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:31.990628 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:31.990639 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:31.990650 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:31.990660 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:31.990671 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:31.990682 | orchestrator | 2025-10-09 10:05:31.990693 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-10-09 10:05:31.990704 | orchestrator | Thursday 09 October 2025 10:05:20 +0000 (0:00:00.763) 0:03:57.801 ****** 2025-10-09 10:05:31.990716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:05:31.990729 | orchestrator | 2025-10-09 10:05:31.990740 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-10-09 10:05:31.990751 | orchestrator | Thursday 09 October 2025 10:05:21 +0000 (0:00:00.424) 0:03:58.225 ****** 2025-10-09 10:05:31.990769 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:31.990780 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:05:31.990791 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:05:31.990801 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:05:31.990812 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:05:31.990823 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:05:31.990834 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:05:31.990844 | orchestrator | 2025-10-09 10:05:31.990856 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-10-09 10:05:31.990866 | orchestrator | Thursday 09 October 2025 10:05:29 +0000 (0:00:08.471) 
0:04:06.696 ****** 2025-10-09 10:05:31.990877 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:31.990888 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:31.990898 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:31.990909 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:31.990920 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:31.990930 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:31.990941 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:31.990952 | orchestrator | 2025-10-09 10:05:31.990963 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-10-09 10:05:31.990974 | orchestrator | Thursday 09 October 2025 10:05:30 +0000 (0:00:01.326) 0:04:08.023 ****** 2025-10-09 10:05:31.990984 | orchestrator | ok: [testbed-manager] 2025-10-09 10:05:31.990995 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:05:31.991006 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:05:31.991016 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:05:31.991027 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:05:31.991037 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:05:31.991048 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:05:31.991058 | orchestrator | 2025-10-09 10:05:31.991077 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-10-09 10:06:42.455654 | orchestrator | Thursday 09 October 2025 10:05:31 +0000 (0:00:01.062) 0:04:09.086 ****** 2025-10-09 10:06:42.455767 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:42.455782 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:42.455792 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:42.455802 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:42.455811 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:42.455821 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:42.455830 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:42.455840 | 
orchestrator | 2025-10-09 10:06:42.455851 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-10-09 10:06:42.455862 | orchestrator | Thursday 09 October 2025 10:05:32 +0000 (0:00:00.427) 0:04:09.514 ****** 2025-10-09 10:06:42.455872 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:42.455881 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:42.455891 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:42.455900 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:42.455909 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:42.455919 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:42.455928 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:42.455937 | orchestrator | 2025-10-09 10:06:42.455963 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-10-09 10:06:42.455973 | orchestrator | Thursday 09 October 2025 10:05:32 +0000 (0:00:00.346) 0:04:09.861 ****** 2025-10-09 10:06:42.455983 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:42.455992 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:42.456001 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:42.456012 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:42.456021 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:42.456030 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:42.456040 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:42.456049 | orchestrator | 2025-10-09 10:06:42.456059 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-10-09 10:06:42.456069 | orchestrator | Thursday 09 October 2025 10:05:33 +0000 (0:00:00.315) 0:04:10.177 ****** 2025-10-09 10:06:42.456100 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:06:42.456111 | orchestrator | ok: [testbed-manager] 2025-10-09 10:06:42.456120 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:06:42.456130 | 
orchestrator | ok: [testbed-node-1] 2025-10-09 10:06:42.456139 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:06:42.456148 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:06:42.456158 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:06:42.456167 | orchestrator | 2025-10-09 10:06:42.456177 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-10-09 10:06:42.456188 | orchestrator | Thursday 09 October 2025 10:05:38 +0000 (0:00:05.729) 0:04:15.907 ****** 2025-10-09 10:06:42.456201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:06:42.456214 | orchestrator | 2025-10-09 10:06:42.456225 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-10-09 10:06:42.456237 | orchestrator | Thursday 09 October 2025 10:05:39 +0000 (0:00:00.441) 0:04:16.348 ****** 2025-10-09 10:06:42.456248 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-10-09 10:06:42.456259 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-10-09 10:06:42.456270 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-10-09 10:06:42.456282 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-10-09 10:06:42.456293 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:42.456304 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-10-09 10:06:42.456321 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-10-09 10:06:42.456338 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:42.456355 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-10-09 10:06:42.456373 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  
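
The "Disable apt-daily timers" task above is skipped on every host (presumably gated by a cleanup variable); when enabled, disabling those systemd timers is typically expressed like this — a sketch under that assumption, not taken from the role source:

```yaml
- name: Disable apt-daily timers
  ansible.builtin.systemd:
    name: "{{ item }}.timer"
    enabled: false
    state: stopped
  loop:
    - apt-daily-upgrade
    - apt-daily
```
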
2025-10-09 10:06:42.456389 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:42.456406 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-10-09 10:06:42.456452 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-10-09 10:06:42.456467 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:42.456484 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-10-09 10:06:42.456501 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:42.456517 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-10-09 10:06:42.456534 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:42.456551 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-10-09 10:06:42.456568 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-10-09 10:06:42.456579 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:42.456589 | orchestrator | 2025-10-09 10:06:42.456599 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-10-09 10:06:42.456608 | orchestrator | Thursday 09 October 2025 10:05:39 +0000 (0:00:00.331) 0:04:16.679 ****** 2025-10-09 10:06:42.456618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:06:42.456628 | orchestrator | 2025-10-09 10:06:42.456638 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-10-09 10:06:42.456647 | orchestrator | Thursday 09 October 2025 10:05:40 +0000 (0:00:00.449) 0:04:17.129 ****** 2025-10-09 10:06:42.456657 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-10-09 10:06:42.456666 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:06:42.456676 | 
orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-10-09 10:06:42.456686 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-10-09 10:06:42.456705 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:06:42.456733 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-10-09 10:06:42.456743 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:06:42.456752 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:06:42.456762 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-10-09 10:06:42.456771 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-10-09 10:06:42.456781 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:06:42.456790 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:06:42.456800 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-10-09 10:06:42.456809 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:06:42.456818 | orchestrator | 2025-10-09 10:06:42.456828 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-10-09 10:06:42.456838 | orchestrator | Thursday 09 October 2025 10:05:40 +0000 (0:00:00.338) 0:04:17.467 ****** 2025-10-09 10:06:42.456847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:06:42.456857 | orchestrator | 2025-10-09 10:06:42.456867 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-10-09 10:06:42.456876 | orchestrator | Thursday 09 October 2025 10:05:40 +0000 (0:00:00.423) 0:04:17.890 ****** 2025-10-09 10:06:42.456886 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:42.456895 | orchestrator | changed: 
[testbed-node-5] 2025-10-09 10:06:42.456904 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:42.456914 | orchestrator | changed: [testbed-manager] 2025-10-09 10:06:42.456923 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:42.456933 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:42.456942 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:42.456951 | orchestrator | 2025-10-09 10:06:42.456961 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-10-09 10:06:42.456970 | orchestrator | Thursday 09 October 2025 10:06:15 +0000 (0:00:34.829) 0:04:52.720 ****** 2025-10-09 10:06:42.456979 | orchestrator | changed: [testbed-manager] 2025-10-09 10:06:42.456989 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:42.456998 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:42.457008 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:42.457017 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:42.457026 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:42.457036 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:42.457045 | orchestrator | 2025-10-09 10:06:42.457055 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-10-09 10:06:42.457064 | orchestrator | Thursday 09 October 2025 10:06:23 +0000 (0:00:08.041) 0:05:00.762 ****** 2025-10-09 10:06:42.457074 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:06:42.457083 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:06:42.457092 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:06:42.457102 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:06:42.457111 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:06:42.457120 | orchestrator | changed: [testbed-manager] 2025-10-09 10:06:42.457130 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:06:42.457139 | orchestrator | 2025-10-09 10:06:42.457149 | 
orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-10-09 10:06:42.457158 | orchestrator | Thursday 09 October 2025 10:06:31 +0000 (0:00:07.837) 0:05:08.600 ******
2025-10-09 10:06:42.457168 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:06:42.457177 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:06:42.457186 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:06:42.457196 | orchestrator | ok: [testbed-manager]
2025-10-09 10:06:42.457205 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:06:42.457220 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:06:42.457230 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:06:42.457239 | orchestrator |
2025-10-09 10:06:42.457249 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-10-09 10:06:42.457258 | orchestrator | Thursday 09 October 2025 10:06:33 +0000 (0:00:01.731) 0:05:10.331 ******
2025-10-09 10:06:42.457268 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:06:42.457277 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:06:42.457286 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:06:42.457296 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:06:42.457305 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:06:42.457314 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:06:42.457324 | orchestrator | changed: [testbed-manager]
2025-10-09 10:06:42.457333 | orchestrator |
2025-10-09 10:06:42.457342 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-10-09 10:06:42.457352 | orchestrator | Thursday 09 October 2025 10:06:39 +0000 (0:00:05.902) 0:05:16.233 ******
2025-10-09 10:06:42.457371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:06:42.457383 | orchestrator |
2025-10-09 10:06:42.457392 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-10-09 10:06:42.457402 | orchestrator | Thursday 09 October 2025 10:06:39 +0000 (0:00:00.626) 0:05:16.860 ******
2025-10-09 10:06:42.457432 | orchestrator | changed: [testbed-manager]
2025-10-09 10:06:42.457443 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:06:42.457452 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:06:42.457462 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:06:42.457471 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:06:42.457481 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:06:42.457490 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:06:42.457500 | orchestrator |
2025-10-09 10:06:42.457510 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-10-09 10:06:42.457519 | orchestrator | Thursday 09 October 2025 10:06:40 +0000 (0:00:00.729) 0:05:17.589 ******
2025-10-09 10:06:42.457529 | orchestrator | ok: [testbed-manager]
2025-10-09 10:06:42.457538 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:06:42.457548 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:06:42.457557 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:06:42.457573 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:06:57.830722 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:06:57.830867 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:06:57.830894 | orchestrator |
2025-10-09 10:06:57.830918 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-10-09 10:06:57.830939 | orchestrator | Thursday 09 October 2025 10:06:42 +0000 (0:00:01.965) 0:05:19.555 ******
2025-10-09 10:06:57.830958 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:06:57.830978 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:06:57.830997 | orchestrator | changed: [testbed-manager]
2025-10-09 10:06:57.831015 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:06:57.831034 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:06:57.831054 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:06:57.831072 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:06:57.831091 | orchestrator |
2025-10-09 10:06:57.831111 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-10-09 10:06:57.831129 | orchestrator | Thursday 09 October 2025 10:06:43 +0000 (0:00:00.835) 0:05:20.391 ******
2025-10-09 10:06:57.831148 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:06:57.831165 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:06:57.831204 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:06:57.831225 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:06:57.831245 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:06:57.831263 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:06:57.831318 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:06:57.831337 | orchestrator |
2025-10-09 10:06:57.831349 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-10-09 10:06:57.831362 | orchestrator | Thursday 09 October 2025 10:06:43 +0000 (0:00:00.298) 0:05:20.689 ******
2025-10-09 10:06:57.831375 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:06:57.831388 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:06:57.831400 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:06:57.831443 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:06:57.831456 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:06:57.831469 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:06:57.831482 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:06:57.831494 | orchestrator |
2025-10-09 10:06:57.831506 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-10-09 10:06:57.831519 | orchestrator | Thursday 09 October 2025 10:06:44 +0000 (0:00:00.448) 0:05:21.138 ******
2025-10-09 10:06:57.831532 | orchestrator | ok: [testbed-manager]
2025-10-09 10:06:57.831544 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:06:57.831556 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:06:57.831569 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:06:57.831582 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:06:57.831594 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:06:57.831605 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:06:57.831615 | orchestrator |
2025-10-09 10:06:57.831626 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-10-09 10:06:57.831637 | orchestrator | Thursday 09 October 2025 10:06:44 +0000 (0:00:00.289) 0:05:21.427 ******
2025-10-09 10:06:57.831648 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:06:57.831658 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:06:57.831669 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:06:57.831680 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:06:57.831690 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:06:57.831701 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:06:57.831711 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:06:57.831722 | orchestrator |
2025-10-09 10:06:57.831733 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-10-09 10:06:57.831745 | orchestrator | Thursday 09 October 2025 10:06:44 +0000 (0:00:00.315) 0:05:21.742 ******
2025-10-09 10:06:57.831756 | orchestrator | ok: [testbed-manager]
2025-10-09 10:06:57.831766 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:06:57.831777 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:06:57.831787 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:06:57.831798 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:06:57.831809 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:06:57.831819 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:06:57.831830 | orchestrator |
2025-10-09 10:06:57.831840 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-10-09 10:06:57.831851 | orchestrator | Thursday 09 October 2025 10:06:44 +0000 (0:00:00.341) 0:05:22.058 ******
2025-10-09 10:06:57.831862 | orchestrator | ok: [testbed-manager] =>
2025-10-09 10:06:57.831872 | orchestrator |   docker_version: 5:27.5.1
2025-10-09 10:06:57.831883 | orchestrator | ok: [testbed-node-0] =>
2025-10-09 10:06:57.831894 | orchestrator |   docker_version: 5:27.5.1
2025-10-09 10:06:57.831904 | orchestrator | ok: [testbed-node-1] =>
2025-10-09 10:06:57.831915 | orchestrator |   docker_version: 5:27.5.1
2025-10-09 10:06:57.831925 | orchestrator | ok: [testbed-node-2] =>
2025-10-09 10:06:57.831936 | orchestrator |   docker_version: 5:27.5.1
2025-10-09 10:06:57.831946 | orchestrator | ok: [testbed-node-3] =>
2025-10-09 10:06:57.831957 | orchestrator |   docker_version: 5:27.5.1
2025-10-09 10:06:57.831967 | orchestrator | ok: [testbed-node-4] =>
2025-10-09 10:06:57.831978 | orchestrator |   docker_version: 5:27.5.1
2025-10-09 10:06:57.831989 | orchestrator | ok: [testbed-node-5] =>
2025-10-09 10:06:57.831999 | orchestrator |   docker_version: 5:27.5.1
2025-10-09 10:06:57.832010 | orchestrator |
2025-10-09 10:06:57.832029 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-10-09 10:06:57.832041 | orchestrator | Thursday 09 October 2025 10:06:45 +0000 (0:00:00.341) 0:05:22.400 ******
2025-10-09 10:06:57.832051 | orchestrator | ok: [testbed-manager] =>
2025-10-09 10:06:57.832062 | orchestrator |   docker_cli_version: 5:27.5.1
2025-10-09 10:06:57.832072 | orchestrator | ok: [testbed-node-0] =>
2025-10-09 10:06:57.832083 | orchestrator |   docker_cli_version: 5:27.5.1
2025-10-09 10:06:57.832093 | orchestrator | ok: [testbed-node-1] =>
2025-10-09 10:06:57.832104 | orchestrator |   docker_cli_version: 5:27.5.1
2025-10-09 10:06:57.832114 | orchestrator | ok: [testbed-node-2] =>
2025-10-09 10:06:57.832125 | orchestrator |   docker_cli_version: 5:27.5.1
2025-10-09 10:06:57.832136 | orchestrator | ok: [testbed-node-3] =>
2025-10-09 10:06:57.832146 | orchestrator |   docker_cli_version: 5:27.5.1
2025-10-09 10:06:57.832156 | orchestrator | ok: [testbed-node-4] =>
2025-10-09 10:06:57.832167 | orchestrator |   docker_cli_version: 5:27.5.1
2025-10-09 10:06:57.832177 | orchestrator | ok: [testbed-node-5] =>
2025-10-09 10:06:57.832188 | orchestrator |   docker_cli_version: 5:27.5.1
2025-10-09 10:06:57.832199 | orchestrator |
2025-10-09 10:06:57.832210 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-10-09 10:06:57.832239 | orchestrator | Thursday 09 October 2025 10:06:45 +0000 (0:00:00.354) 0:05:22.754 ******
2025-10-09 10:06:57.832251 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:06:57.832261 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:06:57.832272 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:06:57.832282 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:06:57.832293 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:06:57.832304 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:06:57.832314 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:06:57.832325 | orchestrator |
2025-10-09 10:06:57.832335 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-10-09 10:06:57.832346 | orchestrator | Thursday 09 October 2025 10:06:45 +0000 (0:00:00.306) 0:05:23.061 ******
2025-10-09 10:06:57.832357 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:06:57.832367 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:06:57.832378 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:06:57.832389 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:06:57.832399 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:06:57.832426 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:06:57.832437 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:06:57.832448 | orchestrator |
2025-10-09 10:06:57.832466 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-10-09 10:06:57.832477 | orchestrator | Thursday 09 October 2025 10:06:46 +0000 (0:00:00.313) 0:05:23.375 ******
2025-10-09 10:06:57.832489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:06:57.832503 | orchestrator |
2025-10-09 10:06:57.832514 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-10-09 10:06:57.832525 | orchestrator | Thursday 09 October 2025 10:06:46 +0000 (0:00:00.492) 0:05:23.868 ******
2025-10-09 10:06:57.832536 | orchestrator | ok: [testbed-manager]
2025-10-09 10:06:57.832546 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:06:57.832557 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:06:57.832568 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:06:57.832579 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:06:57.832589 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:06:57.832600 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:06:57.832610 | orchestrator |
2025-10-09 10:06:57.832621 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-10-09 10:06:57.832632 | orchestrator | Thursday 09 October 2025 10:06:47 +0000 (0:00:01.109) 0:05:24.977 ******
2025-10-09 10:06:57.832643 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:06:57.832663 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:06:57.832674 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:06:57.832684 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:06:57.832695 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:06:57.832705 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:06:57.832716 | orchestrator | ok: [testbed-manager]
2025-10-09 10:06:57.832727 | orchestrator |
2025-10-09 10:06:57.832738 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-10-09 10:06:57.832749 | orchestrator | Thursday 09 October 2025 10:06:50 +0000 (0:00:02.848) 0:05:27.826 ******
2025-10-09 10:06:57.832761 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-10-09 10:06:57.832771 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-10-09 10:06:57.832782 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-10-09 10:06:57.832793 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-10-09 10:06:57.832804 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-10-09 10:06:57.832814 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-10-09 10:06:57.832825 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:06:57.832835 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-10-09 10:06:57.832846 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-10-09 10:06:57.832857 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-10-09 10:06:57.832868 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:06:57.832878 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-10-09 10:06:57.832889 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-10-09 10:06:57.832899 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-10-09 10:06:57.832910 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:06:57.832921 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-10-09 10:06:57.832931 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-10-09 10:06:57.832942 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-10-09 10:06:57.832953 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:06:57.832963 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-10-09 10:06:57.832974 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-10-09 10:06:57.832985 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-10-09 10:06:57.832995 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:06:57.833006 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:06:57.833017 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-10-09 10:06:57.833028 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-10-09 10:06:57.833039 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-10-09 10:06:57.833049 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:06:57.833060 | orchestrator |
2025-10-09 10:06:57.833071 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-10-09 10:06:57.833082 | orchestrator | Thursday 09 October 2025 10:06:51 +0000 (0:00:00.686) 0:05:28.512 ******
2025-10-09 10:06:57.833092 | orchestrator | ok: [testbed-manager]
2025-10-09 10:06:57.833103 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:06:57.833114 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:06:57.833124 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:06:57.833135 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:06:57.833145 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:06:57.833156 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:06:57.833167 | orchestrator |
2025-10-09 10:06:57.833185 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-10-09 10:07:54.490689 | orchestrator | Thursday 09 October 2025 10:06:57 +0000 (0:00:06.421) 0:05:34.934 ******
2025-10-09 10:07:54.490819 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.490886 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.490926 | orchestrator | ok: [testbed-manager]
2025-10-09 10:07:54.490940 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.490951 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.490962 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.490973 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.490984 | orchestrator |
2025-10-09 10:07:54.490997 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-10-09 10:07:54.491008 | orchestrator | Thursday 09 October 2025 10:06:59 +0000 (0:00:01.323) 0:05:36.258 ******
2025-10-09 10:07:54.491019 | orchestrator | ok: [testbed-manager]
2025-10-09 10:07:54.491029 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.491040 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.491052 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.491063 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.491088 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.491100 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.491111 | orchestrator |
2025-10-09 10:07:54.491122 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-10-09 10:07:54.491133 | orchestrator | Thursday 09 October 2025 10:07:07 +0000 (0:00:08.201) 0:05:44.459 ******
2025-10-09 10:07:54.491143 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.491154 | orchestrator | changed: [testbed-manager]
2025-10-09 10:07:54.491165 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.491176 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.491187 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.491197 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.491208 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.491219 | orchestrator |
2025-10-09 10:07:54.491232 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-10-09 10:07:54.491246 | orchestrator | Thursday 09 October 2025 10:07:10 +0000 (0:00:03.606) 0:05:48.066 ******
2025-10-09 10:07:54.491258 | orchestrator | ok: [testbed-manager]
2025-10-09 10:07:54.491271 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.491285 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.491297 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.491310 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.491323 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.491336 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.491349 | orchestrator |
2025-10-09 10:07:54.491361 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-10-09 10:07:54.491373 | orchestrator | Thursday 09 October 2025 10:07:12 +0000 (0:00:01.394) 0:05:49.461 ******
2025-10-09 10:07:54.491387 | orchestrator | ok: [testbed-manager]
2025-10-09 10:07:54.491424 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.491437 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.491450 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.491462 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.491475 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.491487 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.491499 | orchestrator |
2025-10-09 10:07:54.491511 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-10-09 10:07:54.491524 | orchestrator | Thursday 09 October 2025 10:07:13 +0000 (0:00:01.649) 0:05:51.110 ******
2025-10-09 10:07:54.491536 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:07:54.491549 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:07:54.491561 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:07:54.491573 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:07:54.491585 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:07:54.491596 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:07:54.491606 | orchestrator | changed: [testbed-manager]
2025-10-09 10:07:54.491617 | orchestrator |
2025-10-09 10:07:54.491628 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-10-09 10:07:54.491639 | orchestrator | Thursday 09 October 2025 10:07:14 +0000 (0:00:00.659) 0:05:51.770 ******
2025-10-09 10:07:54.491659 | orchestrator | ok: [testbed-manager]
2025-10-09 10:07:54.491669 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.491680 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.491691 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.491702 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.491713 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.491724 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.491735 | orchestrator |
2025-10-09 10:07:54.491746 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-10-09 10:07:54.491757 | orchestrator | Thursday 09 October 2025 10:07:24 +0000 (0:00:10.005) 0:06:01.775 ******
2025-10-09 10:07:54.491767 | orchestrator | changed: [testbed-manager]
2025-10-09 10:07:54.491778 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.491789 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.491799 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.491810 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.491821 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.491832 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.491842 | orchestrator |
2025-10-09 10:07:54.491854 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-10-09 10:07:54.491864 | orchestrator | Thursday 09 October 2025 10:07:25 +0000 (0:00:00.937) 0:06:02.713 ******
2025-10-09 10:07:54.491875 | orchestrator | ok: [testbed-manager]
2025-10-09 10:07:54.491886 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.491897 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.491908 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.491918 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.491929 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.491940 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.491951 | orchestrator |
2025-10-09 10:07:54.491962 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-10-09 10:07:54.491972 | orchestrator | Thursday 09 October 2025 10:07:35 +0000 (0:00:09.504) 0:06:12.217 ******
2025-10-09 10:07:54.491983 | orchestrator | ok: [testbed-manager]
2025-10-09 10:07:54.491994 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.492005 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.492016 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.492027 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.492037 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.492065 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.492077 | orchestrator |
2025-10-09 10:07:54.492088 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-10-09 10:07:54.492099 | orchestrator | Thursday 09 October 2025 10:07:46 +0000 (0:00:11.612) 0:06:23.830 ******
2025-10-09 10:07:54.492110 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-10-09 10:07:54.492121 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-10-09 10:07:54.492132 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-10-09 10:07:54.492143 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-10-09 10:07:54.492154 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-10-09 10:07:54.492165 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-10-09 10:07:54.492175 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-10-09 10:07:54.492186 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-10-09 10:07:54.492197 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-10-09 10:07:54.492213 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-10-09 10:07:54.492224 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-10-09 10:07:54.492234 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-10-09 10:07:54.492245 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-10-09 10:07:54.492256 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-10-09 10:07:54.492273 | orchestrator |
2025-10-09 10:07:54.492284 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-10-09 10:07:54.492295 | orchestrator | Thursday 09 October 2025 10:07:48 +0000 (0:00:01.372) 0:06:25.203 ******
2025-10-09 10:07:54.492305 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:07:54.492316 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:07:54.492327 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:07:54.492338 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:07:54.492348 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:07:54.492359 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:07:54.492370 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:07:54.492381 | orchestrator |
2025-10-09 10:07:54.492392 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-10-09 10:07:54.492430 | orchestrator | Thursday 09 October 2025 10:07:48 +0000 (0:00:00.638) 0:06:25.841 ******
2025-10-09 10:07:54.492442 | orchestrator | ok: [testbed-manager]
2025-10-09 10:07:54.492452 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:07:54.492463 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:07:54.492474 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:07:54.492485 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:07:54.492495 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:07:54.492506 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:07:54.492516 | orchestrator |
2025-10-09 10:07:54.492527 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-10-09 10:07:54.492539 | orchestrator | Thursday 09 October 2025 10:07:52 +0000 (0:00:03.817) 0:06:29.658 ******
2025-10-09 10:07:54.492550 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:07:54.492561 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:07:54.492571 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:07:54.492582 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:07:54.492593 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:07:54.492603 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:07:54.492614 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:07:54.492624 | orchestrator |
2025-10-09 10:07:54.492636 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-10-09 10:07:54.492647 | orchestrator | Thursday 09 October 2025 10:07:53 +0000 (0:00:00.547) 0:06:30.206 ******
2025-10-09 10:07:54.492658 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-10-09 10:07:54.492668 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-10-09 10:07:54.492679 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:07:54.492690 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-10-09 10:07:54.492701 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-10-09 10:07:54.492712 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:07:54.492722 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-10-09 10:07:54.492733 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-10-09 10:07:54.492744 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:07:54.492754 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-10-09 10:07:54.492765 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-10-09 10:07:54.492776 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:07:54.492787 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-10-09 10:07:54.492797 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-10-09 10:07:54.492808 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:07:54.492818 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-10-09 10:07:54.492829 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-10-09 10:07:54.492840 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:07:54.492851 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-10-09 10:07:54.492861 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-10-09 10:07:54.492879 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:07:54.492890 | orchestrator |
2025-10-09 10:07:54.492901 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-10-09 10:07:54.492912 | orchestrator | Thursday 09 October 2025 10:07:53 +0000 (0:00:00.788) 0:06:30.995 ******
2025-10-09 10:07:54.492923 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:07:54.492933 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:07:54.492944 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:07:54.492955 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:07:54.492966 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:07:54.492977 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:07:54.492987 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:07:54.492998 | orchestrator |
2025-10-09 10:07:54.493015 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-10-09 10:08:16.168276 | orchestrator | Thursday 09 October 2025 10:07:54 +0000 (0:00:00.602) 0:06:31.598 ******
2025-10-09 10:08:16.168432 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:08:16.168470 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:08:16.168493 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:08:16.168506 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:08:16.168517 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:08:16.168527 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:08:16.168538 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:08:16.168550 | orchestrator |
2025-10-09 10:08:16.168563 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-10-09 10:08:16.168574 | orchestrator | Thursday 09 October 2025 10:07:55 +0000 (0:00:00.555) 0:06:32.153 ******
2025-10-09 10:08:16.168585 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:08:16.168596 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:08:16.168607 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:08:16.168618 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:08:16.168628 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:08:16.168639 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:08:16.168650 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:08:16.168661 | orchestrator |
2025-10-09 10:08:16.168672 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-10-09 10:08:16.168683 | orchestrator | Thursday 09 October 2025 10:07:55 +0000 (0:00:00.604) 0:06:32.757 ******
2025-10-09 10:08:16.168694 | orchestrator | ok: [testbed-manager]
2025-10-09 10:08:16.168705 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:08:16.168716 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:08:16.168727 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:08:16.168738 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:08:16.168748 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:08:16.168759 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:08:16.168770 | orchestrator |
2025-10-09 10:08:16.168781 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-10-09 10:08:16.168791 | orchestrator | Thursday 09 October 2025 10:07:57 +0000 (0:00:01.725) 0:06:34.483 ******
2025-10-09 10:08:16.168804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:08:16.168819 | orchestrator |
2025-10-09 10:08:16.168878 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-10-09 10:08:16.168893 | orchestrator | Thursday 09 October 2025 10:07:58 +0000 (0:00:01.130) 0:06:35.614 ******
2025-10-09 10:08:16.168906 | orchestrator | ok: [testbed-manager]
2025-10-09 10:08:16.168919 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:08:16.168931 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:08:16.168944 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:08:16.168956 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:08:16.168969 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:08:16.169003 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:08:16.169016 | orchestrator |
2025-10-09 10:08:16.169029 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-10-09 10:08:16.169042 | orchestrator | Thursday 09 October 2025 10:07:59 +0000 (0:00:00.893) 0:06:36.507 ******
2025-10-09 10:08:16.169053 | orchestrator | ok: [testbed-manager]
2025-10-09 10:08:16.169066 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:08:16.169079 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:08:16.169091 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:08:16.169103 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:08:16.169115 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:08:16.169126 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:08:16.169137 | orchestrator |
2025-10-09 10:08:16.169148 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-10-09 10:08:16.169159 | orchestrator | Thursday 09 October 2025 10:08:00 +0000 (0:00:00.819) 0:06:37.327 ******
2025-10-09 10:08:16.169169 | orchestrator | ok: [testbed-manager]
2025-10-09 10:08:16.169180 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:08:16.169191 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:08:16.169201 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:08:16.169212 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:08:16.169222 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:08:16.169233 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:08:16.169243 | orchestrator |
2025-10-09 10:08:16.169254 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-10-09 10:08:16.169266 | orchestrator | Thursday 09 October 2025 10:08:01 +0000 (0:00:01.582) 0:06:38.909 ******
2025-10-09 10:08:16.169276 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:08:16.169287 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:08:16.169297 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:08:16.169308 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:08:16.169319 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:08:16.169329 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:08:16.169340 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:08:16.169350 | orchestrator |
2025-10-09 10:08:16.169361 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-10-09 10:08:16.169372 | orchestrator | Thursday 09 October 2025 10:08:03 +0000 (0:00:01.370) 0:06:40.279 ******
2025-10-09 10:08:16.169383 | orchestrator | ok: [testbed-manager]
2025-10-09 10:08:16.169413 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:08:16.169425 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:08:16.169435 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:08:16.169446 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:08:16.169456 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:08:16.169467 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:08:16.169478 | orchestrator |
2025-10-09 10:08:16.169488 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-10-09 10:08:16.169499 | orchestrator | Thursday 09 October 2025 10:08:04 +0000 (0:00:01.314) 0:06:41.594 ******
2025-10-09 10:08:16.169510 | orchestrator | changed: [testbed-manager]
2025-10-09 10:08:16.169521 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:08:16.169531 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:08:16.169542 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:08:16.169553 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:08:16.169564 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:08:16.169575 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:08:16.169585 | orchestrator |
2025-10-09 10:08:16.169614 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-10-09 10:08:16.169625 | orchestrator | Thursday 09 October 2025 10:08:05 +0000 (0:00:01.445) 0:06:43.039 ******
2025-10-09 10:08:16.169636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:08:16.169656 | orchestrator |
2025-10-09 10:08:16.169667 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-10-09 10:08:16.169678 | orchestrator | Thursday 09 October 2025 10:08:07 +0000 (0:00:01.300) 0:06:44.339 ******
2025-10-09 10:08:16.169689 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:08:16.169699 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:08:16.169710 | orchestrator | ok: [testbed-manager]
2025-10-09 10:08:16.169721 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:08:16.169731 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:08:16.169742 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:08:16.169753 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:08:16.169763 | orchestrator |
2025-10-09 10:08:16.169774 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-10-09 10:08:16.169785 | orchestrator | Thursday 09 October 2025 10:08:08 +0000 (0:00:01.379) 0:06:45.719 ******
2025-10-09 10:08:16.169796 | orchestrator | ok: [testbed-manager]
2025-10-09 10:08:16.169807 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:08:16.169817 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:08:16.169828 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:08:16.169838 | orchestrator |
ok: [testbed-node-3] 2025-10-09 10:08:16.169849 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:16.169859 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:16.169870 | orchestrator | 2025-10-09 10:08:16.169881 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-10-09 10:08:16.169891 | orchestrator | Thursday 09 October 2025 10:08:09 +0000 (0:00:01.238) 0:06:46.957 ****** 2025-10-09 10:08:16.169902 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:16.169913 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:16.169924 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:16.169934 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:16.169945 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:16.169955 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:16.169966 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:16.169977 | orchestrator | 2025-10-09 10:08:16.169987 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-10-09 10:08:16.169998 | orchestrator | Thursday 09 October 2025 10:08:10 +0000 (0:00:01.141) 0:06:48.098 ****** 2025-10-09 10:08:16.170009 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:16.170072 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:16.170084 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:16.170095 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:16.170106 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:16.170116 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:16.170127 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:16.170138 | orchestrator | 2025-10-09 10:08:16.170149 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-10-09 10:08:16.170160 | orchestrator | Thursday 09 October 2025 10:08:12 +0000 (0:00:01.370) 0:06:49.468 ****** 2025-10-09 10:08:16.170170 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:08:16.170181 | orchestrator | 2025-10-09 10:08:16.170192 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:08:16.170203 | orchestrator | Thursday 09 October 2025 10:08:13 +0000 (0:00:00.960) 0:06:50.429 ****** 2025-10-09 10:08:16.170213 | orchestrator | 2025-10-09 10:08:16.170224 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:08:16.170235 | orchestrator | Thursday 09 October 2025 10:08:13 +0000 (0:00:00.043) 0:06:50.473 ****** 2025-10-09 10:08:16.170245 | orchestrator | 2025-10-09 10:08:16.170256 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:08:16.170267 | orchestrator | Thursday 09 October 2025 10:08:13 +0000 (0:00:00.049) 0:06:50.523 ****** 2025-10-09 10:08:16.170277 | orchestrator | 2025-10-09 10:08:16.170288 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:08:16.170311 | orchestrator | Thursday 09 October 2025 10:08:13 +0000 (0:00:00.053) 0:06:50.577 ****** 2025-10-09 10:08:16.170322 | orchestrator | 2025-10-09 10:08:16.170332 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:08:16.170343 | orchestrator | Thursday 09 October 2025 10:08:13 +0000 (0:00:00.041) 0:06:50.618 ****** 2025-10-09 10:08:16.170354 | orchestrator | 2025-10-09 10:08:16.170364 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:08:16.170375 | orchestrator | Thursday 09 October 2025 10:08:13 +0000 (0:00:00.044) 0:06:50.663 ****** 2025-10-09 10:08:16.170385 | orchestrator | 2025-10-09 
10:08:16.170441 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-10-09 10:08:16.170459 | orchestrator | Thursday 09 October 2025 10:08:13 +0000 (0:00:00.048) 0:06:50.711 ****** 2025-10-09 10:08:16.170476 | orchestrator | 2025-10-09 10:08:16.170495 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-10-09 10:08:16.170513 | orchestrator | Thursday 09 October 2025 10:08:13 +0000 (0:00:00.040) 0:06:50.752 ****** 2025-10-09 10:08:16.170529 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:16.170540 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:16.170550 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:16.170561 | orchestrator | 2025-10-09 10:08:16.170572 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-10-09 10:08:16.170583 | orchestrator | Thursday 09 October 2025 10:08:14 +0000 (0:00:01.125) 0:06:51.877 ****** 2025-10-09 10:08:16.170594 | orchestrator | changed: [testbed-manager] 2025-10-09 10:08:16.170605 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:08:16.170615 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:08:16.170626 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:08:16.170637 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:08:16.170657 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:08:45.012671 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:08:45.012787 | orchestrator | 2025-10-09 10:08:45.012806 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-10-09 10:08:45.012820 | orchestrator | Thursday 09 October 2025 10:08:16 +0000 (0:00:01.390) 0:06:53.268 ****** 2025-10-09 10:08:45.012832 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:08:45.012843 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:08:45.012854 | orchestrator | changed: [testbed-node-2] 2025-10-09 
10:08:45.012865 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:08:45.012876 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:08:45.012887 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:08:45.012898 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:08:45.012909 | orchestrator | 2025-10-09 10:08:45.012921 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-10-09 10:08:45.012932 | orchestrator | Thursday 09 October 2025 10:08:18 +0000 (0:00:02.781) 0:06:56.049 ****** 2025-10-09 10:08:45.012959 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:08:45.012971 | orchestrator | 2025-10-09 10:08:45.012982 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-10-09 10:08:45.012993 | orchestrator | Thursday 09 October 2025 10:08:19 +0000 (0:00:00.103) 0:06:56.152 ****** 2025-10-09 10:08:45.013004 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.013016 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:08:45.013027 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:08:45.013038 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:08:45.013049 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:08:45.013060 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:08:45.013070 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:08:45.013081 | orchestrator | 2025-10-09 10:08:45.013092 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-10-09 10:08:45.013104 | orchestrator | Thursday 09 October 2025 10:08:20 +0000 (0:00:01.018) 0:06:57.171 ****** 2025-10-09 10:08:45.013115 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:08:45.013150 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:08:45.013161 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:08:45.013172 | orchestrator | skipping: [testbed-node-2] 2025-10-09 
10:08:45.013183 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:08:45.013194 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:08:45.013207 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:08:45.013219 | orchestrator | 2025-10-09 10:08:45.013231 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-10-09 10:08:45.013244 | orchestrator | Thursday 09 October 2025 10:08:20 +0000 (0:00:00.595) 0:06:57.767 ****** 2025-10-09 10:08:45.013258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:08:45.013273 | orchestrator | 2025-10-09 10:08:45.013287 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-10-09 10:08:45.013300 | orchestrator | Thursday 09 October 2025 10:08:21 +0000 (0:00:01.144) 0:06:58.912 ****** 2025-10-09 10:08:45.013312 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.013325 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:45.013339 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:45.013352 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:45.013365 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:45.013378 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:45.013416 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:45.013429 | orchestrator | 2025-10-09 10:08:45.013442 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-10-09 10:08:45.013454 | orchestrator | Thursday 09 October 2025 10:08:22 +0000 (0:00:00.904) 0:06:59.816 ****** 2025-10-09 10:08:45.013467 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-10-09 10:08:45.013480 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-10-09 10:08:45.013493 
| orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-10-09 10:08:45.013506 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-10-09 10:08:45.013519 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-10-09 10:08:45.013531 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-10-09 10:08:45.013544 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-10-09 10:08:45.013556 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-10-09 10:08:45.013568 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-10-09 10:08:45.013578 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-10-09 10:08:45.013589 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-10-09 10:08:45.013599 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-10-09 10:08:45.013610 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-10-09 10:08:45.013620 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-10-09 10:08:45.013631 | orchestrator | 2025-10-09 10:08:45.013642 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-10-09 10:08:45.013653 | orchestrator | Thursday 09 October 2025 10:08:25 +0000 (0:00:02.529) 0:07:02.346 ****** 2025-10-09 10:08:45.013663 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:08:45.013674 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:08:45.013685 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:08:45.013695 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:08:45.013706 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:08:45.013716 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:08:45.013727 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:08:45.013737 | orchestrator | 2025-10-09 10:08:45.013749 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-10-09 10:08:45.013760 | orchestrator | Thursday 09 October 2025 10:08:25 +0000 (0:00:00.567) 0:07:02.913 ****** 2025-10-09 10:08:45.013798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:08:45.013812 | orchestrator | 2025-10-09 10:08:45.013823 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-10-09 10:08:45.013834 | orchestrator | Thursday 09 October 2025 10:08:26 +0000 (0:00:01.182) 0:07:04.096 ****** 2025-10-09 10:08:45.013845 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.013856 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:45.013866 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:45.013877 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:45.013888 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:45.013899 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:45.013909 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:45.013920 | orchestrator | 2025-10-09 10:08:45.013931 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-10-09 10:08:45.013948 | orchestrator | Thursday 09 October 2025 10:08:27 +0000 (0:00:00.888) 0:07:04.984 ****** 2025-10-09 10:08:45.013959 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.013970 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:45.013980 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:45.013991 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:45.014002 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:45.014012 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:45.014086 | orchestrator | ok: [testbed-node-5] 2025-10-09 
10:08:45.014098 | orchestrator | 2025-10-09 10:08:45.014108 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-10-09 10:08:45.014119 | orchestrator | Thursday 09 October 2025 10:08:28 +0000 (0:00:00.843) 0:07:05.828 ****** 2025-10-09 10:08:45.014130 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:08:45.014141 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:08:45.014151 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:08:45.014162 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:08:45.014173 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:08:45.014183 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:08:45.014194 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:08:45.014204 | orchestrator | 2025-10-09 10:08:45.014215 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-10-09 10:08:45.014226 | orchestrator | Thursday 09 October 2025 10:08:29 +0000 (0:00:00.822) 0:07:06.650 ****** 2025-10-09 10:08:45.014237 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.014247 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:45.014258 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:45.014268 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:45.014279 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:08:45.014289 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:08:45.014300 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:08:45.014310 | orchestrator | 2025-10-09 10:08:45.014321 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-10-09 10:08:45.014332 | orchestrator | Thursday 09 October 2025 10:08:31 +0000 (0:00:01.492) 0:07:08.143 ****** 2025-10-09 10:08:45.014343 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:08:45.014354 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:08:45.014365 | orchestrator | skipping: 
[testbed-node-1] 2025-10-09 10:08:45.014375 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:08:45.014412 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:08:45.014424 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:08:45.014435 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:08:45.014445 | orchestrator | 2025-10-09 10:08:45.014457 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-10-09 10:08:45.014468 | orchestrator | Thursday 09 October 2025 10:08:31 +0000 (0:00:00.541) 0:07:08.684 ****** 2025-10-09 10:08:45.014478 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.014497 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:08:45.014508 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:08:45.014518 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:08:45.014529 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:08:45.014540 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:08:45.014550 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:08:45.014561 | orchestrator | 2025-10-09 10:08:45.014572 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-10-09 10:08:45.014583 | orchestrator | Thursday 09 October 2025 10:08:38 +0000 (0:00:07.390) 0:07:16.075 ****** 2025-10-09 10:08:45.014593 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.014604 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:08:45.014614 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:08:45.014625 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:08:45.014636 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:08:45.014646 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:08:45.014657 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:08:45.014667 | orchestrator | 2025-10-09 10:08:45.014678 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-10-09 10:08:45.014689 | orchestrator | Thursday 09 October 2025 10:08:40 +0000 (0:00:01.389) 0:07:17.465 ****** 2025-10-09 10:08:45.014699 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.014710 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:08:45.014720 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:08:45.014731 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:08:45.014742 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:08:45.014752 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:08:45.014762 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:08:45.014773 | orchestrator | 2025-10-09 10:08:45.014784 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-10-09 10:08:45.014795 | orchestrator | Thursday 09 October 2025 10:08:42 +0000 (0:00:01.940) 0:07:19.406 ****** 2025-10-09 10:08:45.014805 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.014816 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:08:45.014826 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:08:45.014837 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:08:45.014848 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:08:45.014859 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:08:45.014869 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:08:45.014880 | orchestrator | 2025-10-09 10:08:45.014891 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-10-09 10:08:45.014901 | orchestrator | Thursday 09 October 2025 10:08:44 +0000 (0:00:01.823) 0:07:21.230 ****** 2025-10-09 10:08:45.014912 | orchestrator | ok: [testbed-manager] 2025-10-09 10:08:45.014923 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:08:45.014933 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:08:45.014944 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:08:45.014962 | orchestrator | ok: 
[testbed-node-3] 2025-10-09 10:09:17.606347 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:09:17.606489 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:09:17.606505 | orchestrator | 2025-10-09 10:09:17.606519 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-10-09 10:09:17.606532 | orchestrator | Thursday 09 October 2025 10:08:44 +0000 (0:00:00.885) 0:07:22.115 ****** 2025-10-09 10:09:17.606543 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:09:17.606555 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:09:17.606565 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:09:17.606576 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:09:17.606587 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:09:17.606598 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:09:17.606609 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:09:17.606620 | orchestrator | 2025-10-09 10:09:17.606631 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-10-09 10:09:17.606683 | orchestrator | Thursday 09 October 2025 10:08:46 +0000 (0:00:01.079) 0:07:23.195 ****** 2025-10-09 10:09:17.606696 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:09:17.606707 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:09:17.606718 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:09:17.606728 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:09:17.606739 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:09:17.606749 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:09:17.606760 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:09:17.606771 | orchestrator | 2025-10-09 10:09:17.606782 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-10-09 10:09:17.606792 | orchestrator | Thursday 09 October 2025 10:08:46 +0000 (0:00:00.598) 0:07:23.793 
****** 2025-10-09 10:09:17.606803 | orchestrator | ok: [testbed-manager] 2025-10-09 10:09:17.606814 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:09:17.606824 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:09:17.606835 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:09:17.606845 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:09:17.606856 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:09:17.606867 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:09:17.606880 | orchestrator | 2025-10-09 10:09:17.606893 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-10-09 10:09:17.606907 | orchestrator | Thursday 09 October 2025 10:08:47 +0000 (0:00:00.559) 0:07:24.352 ****** 2025-10-09 10:09:17.606920 | orchestrator | ok: [testbed-manager] 2025-10-09 10:09:17.606933 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:09:17.606945 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:09:17.606958 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:09:17.606970 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:09:17.606983 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:09:17.606995 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:09:17.607008 | orchestrator | 2025-10-09 10:09:17.607021 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-10-09 10:09:17.607033 | orchestrator | Thursday 09 October 2025 10:08:47 +0000 (0:00:00.553) 0:07:24.905 ****** 2025-10-09 10:09:17.607045 | orchestrator | ok: [testbed-manager] 2025-10-09 10:09:17.607058 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:09:17.607071 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:09:17.607083 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:09:17.607094 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:09:17.607107 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:09:17.607118 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:09:17.607130 | orchestrator | 
2025-10-09 10:09:17.607143 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-10-09 10:09:17.607156 | orchestrator | Thursday 09 October 2025 10:08:48 +0000 (0:00:00.563) 0:07:25.469 ****** 2025-10-09 10:09:17.607168 | orchestrator | ok: [testbed-manager] 2025-10-09 10:09:17.607180 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:09:17.607192 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:09:17.607205 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:09:17.607218 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:09:17.607231 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:09:17.607243 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:09:17.607255 | orchestrator | 2025-10-09 10:09:17.607266 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-10-09 10:09:17.607277 | orchestrator | Thursday 09 October 2025 10:08:54 +0000 (0:00:06.058) 0:07:31.527 ****** 2025-10-09 10:09:17.607288 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:09:17.607298 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:09:17.607310 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:09:17.607320 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:09:17.607331 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:09:17.607341 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:09:17.607352 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:09:17.607363 | orchestrator | 2025-10-09 10:09:17.607374 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-10-09 10:09:17.607415 | orchestrator | Thursday 09 October 2025 10:08:54 +0000 (0:00:00.558) 0:07:32.086 ****** 2025-10-09 10:09:17.607429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:09:17.607443 | orchestrator |
2025-10-09 10:09:17.607454 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-10-09 10:09:17.607464 | orchestrator | Thursday 09 October 2025 10:08:55 +0000 (0:00:00.891) 0:07:32.977 ******
2025-10-09 10:09:17.607475 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:09:17.607486 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:09:17.607496 | orchestrator | ok: [testbed-manager]
2025-10-09 10:09:17.607507 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:09:17.607518 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:09:17.607528 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:09:17.607539 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:09:17.607549 | orchestrator |
2025-10-09 10:09:17.607560 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-10-09 10:09:17.607571 | orchestrator | Thursday 09 October 2025 10:08:57 +0000 (0:00:02.129) 0:07:35.107 ******
2025-10-09 10:09:17.607582 | orchestrator | ok: [testbed-manager]
2025-10-09 10:09:17.607592 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:09:17.607603 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:09:17.607613 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:09:17.607624 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:09:17.607634 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:09:17.607645 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:09:17.607655 | orchestrator |
2025-10-09 10:09:17.607688 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-10-09 10:09:17.607701 | orchestrator | Thursday 09 October 2025 10:08:59 +0000 (0:00:01.202) 0:07:36.309 ******
2025-10-09 10:09:17.607711 | orchestrator | ok: [testbed-manager]
2025-10-09 10:09:17.607722 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:09:17.607733 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:09:17.607743 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:09:17.607754 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:09:17.607764 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:09:17.607775 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:09:17.607785 | orchestrator |
2025-10-09 10:09:17.607796 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-10-09 10:09:17.607807 | orchestrator | Thursday 09 October 2025 10:09:00 +0000 (0:00:00.876) 0:07:37.186 ******
2025-10-09 10:09:17.607818 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-09 10:09:17.607831 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-09 10:09:17.607842 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-09 10:09:17.607853 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-09 10:09:17.607863 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-09 10:09:17.607874 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-09 10:09:17.607885 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-10-09 10:09:17.607896 | orchestrator |
2025-10-09 10:09:17.607907 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-10-09 10:09:17.607924 | orchestrator | Thursday 09 October 2025 10:09:01 +0000 (0:00:01.815) 0:07:39.002 ******
2025-10-09 10:09:17.607936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:09:17.607947 | orchestrator |
2025-10-09 10:09:17.607958 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-10-09 10:09:17.607969 | orchestrator | Thursday 09 October 2025 10:09:02 +0000 (0:00:01.073) 0:07:40.075 ******
2025-10-09 10:09:17.607979 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:17.607990 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:17.608001 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:17.608012 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:17.608029 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:17.608041 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:17.608052 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:17.608062 | orchestrator |
2025-10-09 10:09:17.608073 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-10-09 10:09:17.608084 | orchestrator | Thursday 09 October 2025 10:09:12 +0000 (0:00:09.491) 0:07:49.567 ******
2025-10-09 10:09:17.608095 | orchestrator | ok: [testbed-manager]
2025-10-09 10:09:17.608105 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:09:17.608116 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:09:17.608126 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:09:17.608137 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:09:17.608147 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:09:17.608158 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:09:17.608169 | orchestrator |
2025-10-09 10:09:17.608179 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-10-09 10:09:17.608190 | orchestrator | Thursday 09 October 2025 10:09:14 +0000 (0:00:02.023) 0:07:51.591 ******
2025-10-09 10:09:17.608201 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:09:17.608211 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:09:17.608222 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:09:17.608232 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:09:17.608243 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:09:17.608253 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:09:17.608264 | orchestrator |
2025-10-09 10:09:17.608275 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-10-09 10:09:17.608285 | orchestrator | Thursday 09 October 2025 10:09:15 +0000 (0:00:01.325) 0:07:52.916 ******
2025-10-09 10:09:17.608296 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:17.608307 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:17.608318 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:17.608329 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:17.608339 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:17.608350 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:17.608360 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:17.608371 | orchestrator |
2025-10-09 10:09:17.608403 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-10-09 10:09:17.608416 | orchestrator |
2025-10-09 10:09:17.608426 | orchestrator | TASK [Include hardening role] **************************************************
2025-10-09 10:09:17.608437 | orchestrator | Thursday 09 October 2025 10:09:17 +0000 (0:00:01.247) 0:07:54.163 ******
2025-10-09 10:09:17.608448 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:09:17.608458 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:09:17.608469 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:09:17.608480 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:09:17.608491 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:09:17.608501 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:09:17.608519 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:09:45.011936 | orchestrator |
2025-10-09 10:09:45.012040 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-10-09 10:09:45.012095 | orchestrator |
2025-10-09 10:09:45.012105 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-10-09 10:09:45.012114 | orchestrator | Thursday 09 October 2025 10:09:17 +0000 (0:00:00.549) 0:07:54.712 ******
2025-10-09 10:09:45.012122 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:45.012130 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:45.012138 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:45.012145 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:45.012152 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:45.012159 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:45.012166 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:45.012173 | orchestrator |
2025-10-09 10:09:45.012180 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-10-09 10:09:45.012193 | orchestrator | Thursday 09 October 2025 10:09:19 +0000 (0:00:01.640) 0:07:56.353 ******
2025-10-09 10:09:45.012201 | orchestrator | ok: [testbed-manager]
2025-10-09 10:09:45.012209 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:09:45.012216 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:09:45.012223 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:09:45.012231 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:09:45.012238 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:09:45.012245 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:09:45.012252 | orchestrator |
2025-10-09 10:09:45.012259 | orchestrator | TASK [Include auditd role] *****************************************************
2025-10-09 10:09:45.012266 | orchestrator | Thursday 09 October 2025 10:09:20 +0000 (0:00:01.438) 0:07:57.792 ******
2025-10-09 10:09:45.012274 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:09:45.012281 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:09:45.012288 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:09:45.012295 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:09:45.012302 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:09:45.012309 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:09:45.012316 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:09:45.012324 | orchestrator |
2025-10-09 10:09:45.012331 | orchestrator | TASK [Include smartd role] *****************************************************
2025-10-09 10:09:45.012338 | orchestrator | Thursday 09 October 2025 10:09:21 +0000 (0:00:00.514) 0:07:58.306 ******
2025-10-09 10:09:45.012346 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:09:45.012354 | orchestrator |
2025-10-09 10:09:45.012362 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-10-09 10:09:45.012369 | orchestrator | Thursday 09 October 2025 10:09:22 +0000 (0:00:01.051) 0:07:59.357 ******
2025-10-09 10:09:45.012377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:09:45.012415 | orchestrator |
2025-10-09 10:09:45.012423 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-10-09 10:09:45.012431 | orchestrator | Thursday 09 October 2025 10:09:23 +0000 (0:00:00.820) 0:08:00.178 ******
2025-10-09 10:09:45.012438 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:45.012445 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:45.012452 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:45.012461 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:45.012469 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:45.012478 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:45.012486 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:45.012494 | orchestrator |
2025-10-09 10:09:45.012503 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-10-09 10:09:45.012512 | orchestrator | Thursday 09 October 2025 10:09:31 +0000 (0:00:08.479) 0:08:08.657 ******
2025-10-09 10:09:45.012520 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:45.012534 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:45.012543 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:45.012551 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:45.012559 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:45.012568 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:45.012576 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:45.012585 | orchestrator |
2025-10-09 10:09:45.012593 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-10-09 10:09:45.012602 | orchestrator | Thursday 09 October 2025 10:09:32 +0000 (0:00:00.887) 0:08:09.545 ******
2025-10-09 10:09:45.012610 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:45.012619 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:45.012627 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:45.012636 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:45.012644 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:45.012652 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:45.012661 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:45.012669 | orchestrator |
2025-10-09 10:09:45.012677 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-10-09 10:09:45.012686 | orchestrator | Thursday 09 October 2025 10:09:34 +0000 (0:00:01.597) 0:08:11.142 ******
2025-10-09 10:09:45.012694 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:45.012703 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:45.012714 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:45.012726 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:45.012738 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:45.012750 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:45.012761 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:45.012774 | orchestrator |
2025-10-09 10:09:45.012786 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-10-09 10:09:45.012801 | orchestrator | Thursday 09 October 2025 10:09:35 +0000 (0:00:01.818) 0:08:12.960 ******
2025-10-09 10:09:45.012813 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:45.012826 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:45.012834 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:45.012841 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:45.012861 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:45.012869 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:45.012876 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:45.012883 | orchestrator |
2025-10-09 10:09:45.012890 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-10-09 10:09:45.012897 | orchestrator | Thursday 09 October 2025 10:09:37 +0000 (0:00:01.463) 0:08:14.423 ******
2025-10-09 10:09:45.012904 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:45.012912 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:45.012919 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:45.012926 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:45.012933 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:45.012940 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:45.012947 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:45.012954 | orchestrator |
2025-10-09 10:09:45.012961 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-10-09 10:09:45.012968 | orchestrator |
2025-10-09 10:09:45.012980 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-10-09 10:09:45.012987 | orchestrator | Thursday 09 October 2025 10:09:38 +0000 (0:00:01.229) 0:08:15.653 ******
2025-10-09 10:09:45.012995 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:09:45.013002 | orchestrator |
2025-10-09 10:09:45.013009 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-10-09 10:09:45.013016 | orchestrator | Thursday 09 October 2025 10:09:39 +0000 (0:00:00.977) 0:08:16.630 ******
2025-10-09 10:09:45.013030 | orchestrator | ok: [testbed-manager]
2025-10-09 10:09:45.013037 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:09:45.013044 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:09:45.013051 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:09:45.013058 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:09:45.013066 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:09:45.013073 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:09:45.013080 | orchestrator |
2025-10-09 10:09:45.013087 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-10-09 10:09:45.013095 | orchestrator | Thursday 09 October 2025 10:09:40 +0000 (0:00:00.866) 0:08:17.497 ******
2025-10-09 10:09:45.013102 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:45.013109 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:45.013116 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:45.013123 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:45.013130 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:45.013137 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:45.013144 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:45.013151 | orchestrator |
2025-10-09 10:09:45.013158 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-10-09 10:09:45.013166 | orchestrator | Thursday 09 October 2025 10:09:41 +0000 (0:00:01.448) 0:08:18.945 ******
2025-10-09 10:09:45.013173 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:09:45.013180 | orchestrator |
2025-10-09 10:09:45.013187 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-10-09 10:09:45.013194 | orchestrator | Thursday 09 October 2025 10:09:42 +0000 (0:00:00.895) 0:08:19.840 ******
2025-10-09 10:09:45.013202 | orchestrator | ok: [testbed-manager]
2025-10-09 10:09:45.013209 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:09:45.013216 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:09:45.013223 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:09:45.013230 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:09:45.013237 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:09:45.013244 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:09:45.013251 | orchestrator |
2025-10-09 10:09:45.013259 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-10-09 10:09:45.013266 | orchestrator | Thursday 09 October 2025 10:09:43 +0000 (0:00:00.846) 0:08:20.686 ******
2025-10-09 10:09:45.013273 | orchestrator | changed: [testbed-manager]
2025-10-09 10:09:45.013280 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:09:45.013288 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:09:45.013300 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:09:45.013312 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:09:45.013324 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:09:45.013343 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:09:45.013356 | orchestrator |
2025-10-09 10:09:45.013368 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:09:45.013402 | orchestrator | testbed-manager : ok=164  changed=38  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2025-10-09 10:09:45.013416 | orchestrator | testbed-node-0 : ok=173  changed=67  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-09 10:09:45.013428 | orchestrator | testbed-node-1 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-10-09 10:09:45.013441 | orchestrator | testbed-node-2 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-10-09 10:09:45.013453 | orchestrator | testbed-node-3 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-09 10:09:45.013468 | orchestrator | testbed-node-4 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-09 10:09:45.013475 | orchestrator | testbed-node-5 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-10-09 10:09:45.013482 | orchestrator |
2025-10-09 10:09:45.013490 | orchestrator |
2025-10-09 10:09:45.013503 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:09:45.562267 | orchestrator | Thursday 09 October 2025 10:09:44 +0000 (0:00:01.423) 0:08:22.110 ******
2025-10-09 10:09:45.562421 | orchestrator | ===============================================================================
2025-10-09 10:09:45.562438 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.93s
2025-10-09 10:09:45.562450 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.40s
2025-10-09 10:09:45.562462 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.83s
2025-10-09 10:09:45.562472 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.88s
2025-10-09 10:09:45.562506 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.47s
2025-10-09 10:09:45.562518 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.27s
2025-10-09 10:09:45.562530 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.61s
2025-10-09 10:09:45.562541 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.01s
2025-10-09 10:09:45.562552 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.50s
2025-10-09 10:09:45.562563 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.49s
2025-10-09 10:09:45.562573 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.48s
2025-10-09 10:09:45.562584 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.47s
2025-10-09 10:09:45.562595 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.20s
2025-10-09 10:09:45.562606 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.04s
2025-10-09 10:09:45.562617 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.84s
2025-10-09 10:09:45.562628 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.39s
2025-10-09 10:09:45.562638 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.42s
2025-10-09 10:09:45.562649 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 6.06s
2025-10-09 10:09:45.562660 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.92s
2025-10-09 10:09:45.562671 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.90s
2025-10-09 10:09:45.907625 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-10-09 10:09:45.907719 | orchestrator | + osism apply network
2025-10-09 10:09:59.151648 | orchestrator | 2025-10-09 10:09:59 | INFO  | Task f65960ab-ee58-4a28-90de-25baea53419f (network) was prepared for execution.
2025-10-09 10:09:59.151762 | orchestrator | 2025-10-09 10:09:59 | INFO  | It takes a moment until task f65960ab-ee58-4a28-90de-25baea53419f (network) has been started and output is visible here.
2025-10-09 10:10:30.159632 | orchestrator |
2025-10-09 10:10:30.159752 | orchestrator | PLAY [Apply role network] ******************************************************
2025-10-09 10:10:30.159769 | orchestrator |
2025-10-09 10:10:30.159782 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-10-09 10:10:30.159793 | orchestrator | Thursday 09 October 2025 10:10:03 +0000 (0:00:00.280) 0:00:00.280 ******
2025-10-09 10:10:30.159805 | orchestrator | ok: [testbed-manager]
2025-10-09 10:10:30.159817 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:10:30.159828 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:10:30.159839 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:10:30.159850 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:10:30.159889 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:10:30.159900 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:10:30.159912 | orchestrator |
2025-10-09 10:10:30.159923 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-10-09 10:10:30.159934 | orchestrator | Thursday 09 October 2025 10:10:04 +0000 (0:00:00.747) 0:00:01.028 ******
2025-10-09 10:10:30.159947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:10:30.159960 | orchestrator |
2025-10-09 10:10:30.159972 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-10-09 10:10:30.159982 | orchestrator | Thursday 09 October 2025 10:10:05 +0000 (0:00:01.276) 0:00:02.304 ******
2025-10-09 10:10:30.159993 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:10:30.160004 | orchestrator | ok: [testbed-manager]
2025-10-09 10:10:30.160014 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:10:30.160025 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:10:30.160036 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:10:30.160046 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:10:30.160057 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:10:30.160067 | orchestrator |
2025-10-09 10:10:30.160078 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-10-09 10:10:30.160089 | orchestrator | Thursday 09 October 2025 10:10:08 +0000 (0:00:02.357) 0:00:04.662 ******
2025-10-09 10:10:30.160100 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:10:30.160110 | orchestrator | ok: [testbed-manager]
2025-10-09 10:10:30.160121 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:10:30.160132 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:10:30.160142 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:10:30.160153 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:10:30.160163 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:10:30.160174 | orchestrator |
2025-10-09 10:10:30.160185 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-10-09 10:10:30.160196 | orchestrator | Thursday 09 October 2025 10:10:10 +0000 (0:00:01.971) 0:00:06.634 ******
2025-10-09 10:10:30.160207 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-10-09 10:10:30.160218 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-10-09 10:10:30.160229 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-10-09 10:10:30.160240 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-10-09 10:10:30.160251 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-10-09 10:10:30.160262 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-10-09 10:10:30.160272 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-10-09 10:10:30.160283 | orchestrator |
2025-10-09 10:10:30.160294 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-10-09 10:10:30.160305 | orchestrator | Thursday 09 October 2025 10:10:11 +0000 (0:00:01.164) 0:00:07.799 ******
2025-10-09 10:10:30.160316 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-10-09 10:10:30.160328 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-10-09 10:10:30.160352 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:10:30.160363 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-10-09 10:10:30.160399 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-10-09 10:10:30.160411 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-09 10:10:30.160422 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-10-09 10:10:30.160432 | orchestrator |
2025-10-09 10:10:30.160443 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-10-09 10:10:30.160454 | orchestrator | Thursday 09 October 2025 10:10:15 +0000 (0:00:03.686) 0:00:11.485 ******
2025-10-09 10:10:30.160465 | orchestrator | changed: [testbed-manager]
2025-10-09 10:10:30.160475 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:10:30.160486 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:10:30.160506 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:10:30.160517 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:10:30.160527 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:10:30.160538 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:10:30.160549 | orchestrator |
2025-10-09 10:10:30.160560 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-10-09 10:10:30.160571 | orchestrator | Thursday 09 October 2025 10:10:16 +0000 (0:00:01.782) 0:00:13.268 ******
2025-10-09 10:10:30.160582 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:10:30.160592 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-09 10:10:30.160603 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-10-09 10:10:30.160614 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-10-09 10:10:30.160624 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-10-09 10:10:30.160635 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-10-09 10:10:30.160646 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-10-09 10:10:30.160656 | orchestrator |
2025-10-09 10:10:30.160667 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-10-09 10:10:30.160678 | orchestrator | Thursday 09 October 2025 10:10:18 +0000 (0:00:01.874) 0:00:15.143 ******
2025-10-09 10:10:30.160689 | orchestrator | ok: [testbed-manager]
2025-10-09 10:10:30.160699 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:10:30.160710 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:10:30.160721 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:10:30.160732 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:10:30.160742 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:10:30.160753 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:10:30.160763 | orchestrator |
2025-10-09 10:10:30.160774 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-10-09 10:10:30.160802 | orchestrator | Thursday 09 October 2025 10:10:19 +0000 (0:00:01.168) 0:00:16.311 ******
2025-10-09 10:10:30.160814 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:10:30.160825 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:10:30.160835 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:10:30.160846 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:10:30.160857 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:10:30.160867 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:10:30.160878 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:10:30.160888 | orchestrator |
2025-10-09 10:10:30.160899 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-10-09 10:10:30.160910 | orchestrator | Thursday 09 October 2025 10:10:20 +0000 (0:00:00.684) 0:00:16.995 ******
2025-10-09 10:10:30.160921 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:10:30.160931 | orchestrator | ok: [testbed-manager]
2025-10-09 10:10:30.160942 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:10:30.160952 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:10:30.160963 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:10:30.160974 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:10:30.160984 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:10:30.160995 | orchestrator |
2025-10-09 10:10:30.161005 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-10-09 10:10:30.161016 | orchestrator | Thursday 09 October 2025 10:10:22 +0000 (0:00:02.259) 0:00:19.255 ******
2025-10-09 10:10:30.161027 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:10:30.161038 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:10:30.161048 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:10:30.161059 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:10:30.161069 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:10:30.161080 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:10:30.161091 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-10-09 10:10:30.161103 | orchestrator |
2025-10-09 10:10:30.161114 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-10-09 10:10:30.161125 | orchestrator | Thursday 09 October 2025 10:10:23 +0000 (0:00:00.930) 0:00:20.185 ******
2025-10-09 10:10:30.161145 | orchestrator | ok: [testbed-manager]
2025-10-09 10:10:30.161156 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:10:30.161166 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:10:30.161177 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:10:30.161187 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:10:30.161198 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:10:30.161208 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:10:30.161219 | orchestrator |
2025-10-09 10:10:30.161230 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-10-09 10:10:30.161241 | orchestrator | Thursday 09 October 2025 10:10:25 +0000 (0:00:01.731) 0:00:21.917 ******
2025-10-09 10:10:30.161252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:10:30.161264 | orchestrator |
2025-10-09 10:10:30.161275 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-10-09 10:10:30.161285 | orchestrator | Thursday 09 October 2025 10:10:26 +0000 (0:00:01.330) 0:00:23.247 ******
2025-10-09 10:10:30.161296 | orchestrator | ok: [testbed-manager]
2025-10-09 10:10:30.161306 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:10:30.161317 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:10:30.161327 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:10:30.161338 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:10:30.161348 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:10:30.161359 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:10:30.161369 | orchestrator |
2025-10-09 10:10:30.161404 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-10-09 10:10:30.161416 | orchestrator | Thursday 09 October 2025 10:10:28 +0000 (0:00:00.739) 0:00:24.448 ******
2025-10-09 10:10:30.161427 | orchestrator | ok: [testbed-manager]
2025-10-09 10:10:30.161437 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:10:30.161448 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:10:30.161458 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:10:30.161469 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:10:30.161479 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:10:30.161490 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:10:30.161500 | orchestrator |
2025-10-09 10:10:30.161511 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-10-09 10:10:30.161521 | orchestrator | Thursday 09 October 2025 10:10:28 +0000 (0:00:00.739) 0:00:25.187 ******
2025-10-09 10:10:30.161532 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-10-09 10:10:30.161543 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-10-09 10:10:30.161553 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-10-09 10:10:30.161564 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-10-09 10:10:30.161574 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-09 10:10:30.161585 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-10-09 10:10:30.161595 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-09 10:10:30.161606 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-10-09 10:10:30.161616 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-09 10:10:30.161627 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-09 10:10:30.161638 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-09 10:10:30.161648 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-10-09 10:10:30.161659 | orchestrator | skipping: [testbed-node-5]
=> (item=/etc/netplan/01-osism.yaml)  2025-10-09 10:10:30.161669 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-10-09 10:10:30.161687 | orchestrator | 2025-10-09 10:10:30.161704 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-10-09 10:10:48.162687 | orchestrator | Thursday 09 October 2025 10:10:30 +0000 (0:00:01.340) 0:00:26.528 ****** 2025-10-09 10:10:48.162798 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:10:48.162813 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:10:48.162824 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:10:48.162834 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:10:48.162844 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:10:48.162869 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:10:48.162879 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:10:48.162898 | orchestrator | 2025-10-09 10:10:48.162910 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-10-09 10:10:48.162920 | orchestrator | Thursday 09 October 2025 10:10:30 +0000 (0:00:00.670) 0:00:27.198 ****** 2025-10-09 10:10:48.162932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2025-10-09 10:10:48.162945 | orchestrator | 2025-10-09 10:10:48.162955 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-10-09 10:10:48.162965 | orchestrator | Thursday 09 October 2025 10:10:35 +0000 (0:00:04.981) 0:00:32.180 ****** 2025-10-09 10:10:48.162976 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.162988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.162999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163018 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163152 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163182 | orchestrator | 2025-10-09 10:10:48.163192 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-10-09 10:10:48.163201 | orchestrator | Thursday 09 October 2025 10:10:41 +0000 (0:00:06.061) 0:00:38.241 ****** 2025-10-09 10:10:48.163211 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163222 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163311 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-10-09 10:10:48.163322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:48.163363 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:55.128473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-10-09 10:10:55.128586 | orchestrator | 2025-10-09 10:10:55.128605 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-10-09 10:10:55.128619 | orchestrator | Thursday 09 October 2025 10:10:48 +0000 (0:00:06.279) 0:00:44.521 ****** 2025-10-09 10:10:55.128633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:10:55.128646 | orchestrator | 2025-10-09 10:10:55.128658 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-10-09 10:10:55.128670 | orchestrator | Thursday 09 October 2025 10:10:49 +0000 (0:00:01.480) 0:00:46.001 ****** 2025-10-09 10:10:55.128682 | orchestrator | ok: [testbed-manager] 2025-10-09 10:10:55.128696 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:10:55.128707 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:10:55.128719 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:10:55.128731 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:10:55.128742 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:10:55.128754 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:10:55.128765 | orchestrator | 2025-10-09 10:10:55.128777 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-10-09 10:10:55.128789 | orchestrator | Thursday 09 October 2025 10:10:50 +0000 (0:00:01.316) 0:00:47.318 ****** 2025-10-09 10:10:55.128801 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:10:55.128814 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:10:55.128825 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:10:55.128837 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:10:55.128848 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:10:55.128860 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:10:55.128897 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:10:55.128909 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:10:55.128921 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:10:55.128934 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:10:55.128945 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:10:55.128957 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:10:55.128968 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:10:55.128994 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:10:55.129006 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:10:55.129018 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2025-10-09 10:10:55.129029 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:10:55.129041 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:10:55.129053 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:10:55.129064 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:10:55.129076 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:10:55.129087 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:10:55.129099 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:10:55.129110 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:10:55.129122 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:10:55.129133 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:10:55.129145 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:10:55.129157 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:10:55.129168 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:10:55.129180 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:10:55.129191 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-10-09 10:10:55.129203 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-10-09 10:10:55.129214 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-10-09 10:10:55.129226 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-10-09 10:10:55.129237 | 
orchestrator | skipping: [testbed-node-5] 2025-10-09 10:10:55.129249 | orchestrator | 2025-10-09 10:10:55.129260 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-10-09 10:10:55.129291 | orchestrator | Thursday 09 October 2025 10:10:53 +0000 (0:00:02.215) 0:00:49.533 ****** 2025-10-09 10:10:55.129302 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:10:55.129313 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:10:55.129324 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:10:55.129335 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:10:55.129346 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:10:55.129356 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:10:55.129367 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:10:55.129405 | orchestrator | 2025-10-09 10:10:55.129417 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-10-09 10:10:55.129428 | orchestrator | Thursday 09 October 2025 10:10:53 +0000 (0:00:00.695) 0:00:50.228 ****** 2025-10-09 10:10:55.129439 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:10:55.129450 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:10:55.129470 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:10:55.129481 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:10:55.129492 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:10:55.129503 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:10:55.129514 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:10:55.129524 | orchestrator | 2025-10-09 10:10:55.129536 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:10:55.129547 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:10:55.129559 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:10:55.129570 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:10:55.129581 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:10:55.129592 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:10:55.129603 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:10:55.129614 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:10:55.129625 | orchestrator | 2025-10-09 10:10:55.129636 | orchestrator | 2025-10-09 10:10:55.129648 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:10:55.129659 | orchestrator | Thursday 09 October 2025 10:10:54 +0000 (0:00:00.825) 0:00:51.054 ****** 2025-10-09 10:10:55.129670 | orchestrator | =============================================================================== 2025-10-09 10:10:55.129681 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.28s 2025-10-09 10:10:55.129697 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.06s 2025-10-09 10:10:55.129708 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.98s 2025-10-09 10:10:55.129719 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.69s 2025-10-09 10:10:55.129730 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.36s 2025-10-09 10:10:55.129741 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.26s 2025-10-09 10:10:55.129752 | orchestrator | osism.commons.network : Remove unused 
configuration files --------------- 2.22s 2025-10-09 10:10:55.129763 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.97s 2025-10-09 10:10:55.129774 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.87s 2025-10-09 10:10:55.129784 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.78s 2025-10-09 10:10:55.129795 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.73s 2025-10-09 10:10:55.129806 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.48s 2025-10-09 10:10:55.129817 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.34s 2025-10-09 10:10:55.129828 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.33s 2025-10-09 10:10:55.129839 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.32s 2025-10-09 10:10:55.129850 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.28s 2025-10-09 10:10:55.129861 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.20s 2025-10-09 10:10:55.129878 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s 2025-10-09 10:10:55.129889 | orchestrator | osism.commons.network : Create required directories --------------------- 1.16s 2025-10-09 10:10:55.129900 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.93s 2025-10-09 10:10:55.556334 | orchestrator | + osism apply wireguard 2025-10-09 10:11:07.656079 | orchestrator | 2025-10-09 10:11:07 | INFO  | Task bd4a8984-ea7b-4c53-a191-b77779c627c9 (wireguard) was prepared for execution. 
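The vxlan netdev tasks above pass each host an item whose `dests` list is simply every node's underlay IP except the host's own `local_ip`, in lexicographic order. A minimal sketch of that derivation (illustrative only, not code from the osism.commons.network role; the IPs are the ones that appear in the task output above):

```python
# Underlay IPs of all testbed hosts, as seen in the log output above.
node_ips = [
    "192.168.16.5",   # testbed-manager
    "192.168.16.10",  # testbed-node-0
    "192.168.16.11",  # testbed-node-1
    "192.168.16.12",  # testbed-node-2
    "192.168.16.13",  # testbed-node-3
    "192.168.16.14",  # testbed-node-4
    "192.168.16.15",  # testbed-node-5
]

def vxlan_dests(local_ip: str) -> list[str]:
    """Remote VTEP peer list for one node: all node IPs except its own.

    Sorted as strings, which matches the ordering in the log
    (…16.10 … …16.15 before …16.5).
    """
    return sorted(ip for ip in node_ips if ip != local_ip)

# testbed-node-1 (192.168.16.11) gets the other six hosts as peers:
print(vxlan_dests("192.168.16.11"))
# → ['192.168.16.10', '192.168.16.12', '192.168.16.13',
#    '192.168.16.14', '192.168.16.15', '192.168.16.5']
```

This explains why each `changed:` item in the log carries a six-entry `dests` list that differs per host only by the omitted self-address.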
2025-10-09 10:11:07.656191 | orchestrator | 2025-10-09 10:11:07 | INFO  | It takes a moment until task bd4a8984-ea7b-4c53-a191-b77779c627c9 (wireguard) has been started and output is visible here. 2025-10-09 10:11:29.043005 | orchestrator | 2025-10-09 10:11:29.043127 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-10-09 10:11:29.043143 | orchestrator | 2025-10-09 10:11:29.043156 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-10-09 10:11:29.043168 | orchestrator | Thursday 09 October 2025 10:11:12 +0000 (0:00:00.231) 0:00:00.231 ****** 2025-10-09 10:11:29.043180 | orchestrator | ok: [testbed-manager] 2025-10-09 10:11:29.043193 | orchestrator | 2025-10-09 10:11:29.043205 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-10-09 10:11:29.043216 | orchestrator | Thursday 09 October 2025 10:11:13 +0000 (0:00:01.657) 0:00:01.889 ****** 2025-10-09 10:11:29.043228 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:29.043240 | orchestrator | 2025-10-09 10:11:29.043252 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-10-09 10:11:29.043263 | orchestrator | Thursday 09 October 2025 10:11:20 +0000 (0:00:07.134) 0:00:09.024 ****** 2025-10-09 10:11:29.043275 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:29.043287 | orchestrator | 2025-10-09 10:11:29.043298 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-10-09 10:11:29.043310 | orchestrator | Thursday 09 October 2025 10:11:21 +0000 (0:00:00.564) 0:00:09.588 ****** 2025-10-09 10:11:29.043321 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:29.043333 | orchestrator | 2025-10-09 10:11:29.043344 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-10-09 10:11:29.043356 | orchestrator 
| Thursday 09 October 2025 10:11:21 +0000 (0:00:00.452) 0:00:10.041 ****** 2025-10-09 10:11:29.043413 | orchestrator | ok: [testbed-manager] 2025-10-09 10:11:29.043426 | orchestrator | 2025-10-09 10:11:29.043438 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-10-09 10:11:29.043451 | orchestrator | Thursday 09 October 2025 10:11:22 +0000 (0:00:00.711) 0:00:10.752 ****** 2025-10-09 10:11:29.043463 | orchestrator | ok: [testbed-manager] 2025-10-09 10:11:29.043474 | orchestrator | 2025-10-09 10:11:29.043486 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-10-09 10:11:29.043498 | orchestrator | Thursday 09 October 2025 10:11:23 +0000 (0:00:00.444) 0:00:11.196 ****** 2025-10-09 10:11:29.043509 | orchestrator | ok: [testbed-manager] 2025-10-09 10:11:29.043521 | orchestrator | 2025-10-09 10:11:29.043533 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-10-09 10:11:29.043547 | orchestrator | Thursday 09 October 2025 10:11:23 +0000 (0:00:00.440) 0:00:11.637 ****** 2025-10-09 10:11:29.043560 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:29.043574 | orchestrator | 2025-10-09 10:11:29.043587 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-10-09 10:11:29.043601 | orchestrator | Thursday 09 October 2025 10:11:24 +0000 (0:00:01.245) 0:00:12.882 ****** 2025-10-09 10:11:29.043614 | orchestrator | changed: [testbed-manager] => (item=None) 2025-10-09 10:11:29.043627 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:29.043640 | orchestrator | 2025-10-09 10:11:29.043654 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-10-09 10:11:29.043667 | orchestrator | Thursday 09 October 2025 10:11:25 +0000 (0:00:01.035) 0:00:13.918 ****** 2025-10-09 10:11:29.043680 | orchestrator | changed: 
[testbed-manager] 2025-10-09 10:11:29.043721 | orchestrator | 2025-10-09 10:11:29.043734 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-10-09 10:11:29.043762 | orchestrator | Thursday 09 October 2025 10:11:27 +0000 (0:00:01.800) 0:00:15.718 ****** 2025-10-09 10:11:29.043775 | orchestrator | changed: [testbed-manager] 2025-10-09 10:11:29.043788 | orchestrator | 2025-10-09 10:11:29.043801 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:11:29.043815 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:11:29.043829 | orchestrator | 2025-10-09 10:11:29.043843 | orchestrator | 2025-10-09 10:11:29.043856 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:11:29.043870 | orchestrator | Thursday 09 October 2025 10:11:28 +0000 (0:00:01.013) 0:00:16.732 ****** 2025-10-09 10:11:29.043884 | orchestrator | =============================================================================== 2025-10-09 10:11:29.043897 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.13s 2025-10-09 10:11:29.043909 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.80s 2025-10-09 10:11:29.043921 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.66s 2025-10-09 10:11:29.043932 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.25s 2025-10-09 10:11:29.043944 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.04s 2025-10-09 10:11:29.043955 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.01s 2025-10-09 10:11:29.043966 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.71s 
2025-10-09 10:11:29.043978 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-10-09 10:11:29.043989 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-10-09 10:11:29.044000 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s 2025-10-09 10:11:29.044012 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2025-10-09 10:11:29.425778 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-10-09 10:11:29.468408 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-10-09 10:11:29.468435 | orchestrator | Dload Upload Total Spent Left Speed 2025-10-09 10:11:29.547547 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 175 0 --:--:-- --:--:-- --:--:-- 177 2025-10-09 10:11:29.561598 | orchestrator | + osism apply --environment custom workarounds 2025-10-09 10:11:31.594118 | orchestrator | 2025-10-09 10:11:31 | INFO  | Trying to run play workarounds in environment custom 2025-10-09 10:11:41.778666 | orchestrator | 2025-10-09 10:11:41 | INFO  | Task ac016786-cbc3-4e39-8383-02b71fcc08c4 (workarounds) was prepared for execution. 2025-10-09 10:11:41.778782 | orchestrator | 2025-10-09 10:11:41 | INFO  | It takes a moment until task ac016786-cbc3-4e39-8383-02b71fcc08c4 (workarounds) has been started and output is visible here. 
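The wg0.conf written by the "Copy wg0.conf configuration file" task above follows the standard wg-quick format. A minimal sketch of such a file (all addresses and keys here are placeholders for illustration, not values from this run):

```ini
[Interface]
# Tunnel address and listen port of the server side (placeholders)
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# One section per client; preshared key matches the one generated above
PublicKey    = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs   = 10.0.0.2/32
```

The "Manage wg-quick@wg0.service service" and "Restart wg0 service" steps then bring the interface up via systemd's `wg-quick@.service` template unit.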
2025-10-09 10:12:07.959318 | orchestrator | 2025-10-09 10:12:07.959458 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:12:07.959477 | orchestrator | 2025-10-09 10:12:07.959490 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-10-09 10:12:07.959501 | orchestrator | Thursday 09 October 2025 10:11:46 +0000 (0:00:00.170) 0:00:00.170 ****** 2025-10-09 10:12:07.959513 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-10-09 10:12:07.959525 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-10-09 10:12:07.959536 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-10-09 10:12:07.959547 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-10-09 10:12:07.959583 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-10-09 10:12:07.959594 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-10-09 10:12:07.959605 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-10-09 10:12:07.959616 | orchestrator | 2025-10-09 10:12:07.959627 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-10-09 10:12:07.959638 | orchestrator | 2025-10-09 10:12:07.959649 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-10-09 10:12:07.959660 | orchestrator | Thursday 09 October 2025 10:11:47 +0000 (0:00:00.854) 0:00:01.024 ****** 2025-10-09 10:12:07.959671 | orchestrator | ok: [testbed-manager] 2025-10-09 10:12:07.959684 | orchestrator | 2025-10-09 10:12:07.959694 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-10-09 10:12:07.959705 | orchestrator | 2025-10-09 10:12:07.959716 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-10-09 10:12:07.959727 | orchestrator | Thursday 09 October 2025 10:11:49 +0000 (0:00:02.593) 0:00:03.618 ****** 2025-10-09 10:12:07.959738 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:12:07.959749 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:12:07.959759 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:12:07.959770 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:12:07.959780 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:12:07.959791 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:12:07.959802 | orchestrator | 2025-10-09 10:12:07.959813 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-10-09 10:12:07.959823 | orchestrator | 2025-10-09 10:12:07.959834 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-10-09 10:12:07.959846 | orchestrator | Thursday 09 October 2025 10:11:51 +0000 (0:00:01.994) 0:00:05.613 ****** 2025-10-09 10:12:07.959874 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:12:07.959889 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:12:07.959902 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:12:07.959914 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:12:07.959926 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:12:07.959938 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-10-09 10:12:07.959951 | orchestrator | 2025-10-09 10:12:07.959964 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-10-09 10:12:07.959977 | orchestrator | Thursday 09 October 2025 10:11:53 +0000 (0:00:01.529) 0:00:07.142 ****** 2025-10-09 10:12:07.959989 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:12:07.960002 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:12:07.960015 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:12:07.960027 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:12:07.960039 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:12:07.960052 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:12:07.960064 | orchestrator | 2025-10-09 10:12:07.960076 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-10-09 10:12:07.960088 | orchestrator | Thursday 09 October 2025 10:11:56 +0000 (0:00:03.604) 0:00:10.746 ****** 2025-10-09 10:12:07.960101 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:12:07.960113 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:12:07.960125 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:12:07.960137 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:12:07.960150 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:12:07.960162 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:12:07.960183 | orchestrator | 2025-10-09 10:12:07.960197 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-10-09 10:12:07.960210 | orchestrator | 2025-10-09 10:12:07.960221 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-10-09 10:12:07.960232 | orchestrator | Thursday 09 October 2025 10:11:57 +0000 (0:00:00.798) 0:00:11.545 ****** 2025-10-09 10:12:07.960243 | orchestrator | changed: [testbed-manager] 2025-10-09 10:12:07.960254 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:12:07.960264 | orchestrator | changed: [testbed-node-1] 2025-10-09 
10:12:07.960276 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:12:07.960287 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:12:07.960298 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:12:07.960309 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:12:07.960319 | orchestrator | 2025-10-09 10:12:07.960330 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-10-09 10:12:07.960341 | orchestrator | Thursday 09 October 2025 10:11:59 +0000 (0:00:01.863) 0:00:13.408 ****** 2025-10-09 10:12:07.960352 | orchestrator | changed: [testbed-manager] 2025-10-09 10:12:07.960396 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:12:07.960408 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:12:07.960419 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:12:07.960430 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:12:07.960440 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:12:07.960469 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:12:07.960481 | orchestrator | 2025-10-09 10:12:07.960492 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-10-09 10:12:07.960503 | orchestrator | Thursday 09 October 2025 10:12:01 +0000 (0:00:01.663) 0:00:15.072 ****** 2025-10-09 10:12:07.960514 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:12:07.960524 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:12:07.960535 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:12:07.960546 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:12:07.960557 | orchestrator | ok: [testbed-manager] 2025-10-09 10:12:07.960568 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:12:07.960578 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:12:07.960589 | orchestrator | 2025-10-09 10:12:07.960600 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-10-09 10:12:07.960611 | orchestrator 
| Thursday 09 October 2025 10:12:02 +0000 (0:00:01.540) 0:00:16.612 ****** 2025-10-09 10:12:07.960621 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:12:07.960632 | orchestrator | changed: [testbed-manager] 2025-10-09 10:12:07.960643 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:12:07.960653 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:12:07.960664 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:12:07.960675 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:12:07.960685 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:12:07.960696 | orchestrator | 2025-10-09 10:12:07.960707 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-10-09 10:12:07.960718 | orchestrator | Thursday 09 October 2025 10:12:04 +0000 (0:00:01.843) 0:00:18.455 ****** 2025-10-09 10:12:07.960728 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:12:07.960739 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:12:07.960750 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:12:07.960760 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:12:07.960771 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:12:07.960782 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:12:07.960792 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:12:07.960803 | orchestrator | 2025-10-09 10:12:07.960814 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-10-09 10:12:07.960825 | orchestrator | 2025-10-09 10:12:07.960836 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-10-09 10:12:07.960847 | orchestrator | Thursday 09 October 2025 10:12:05 +0000 (0:00:00.662) 0:00:19.118 ****** 2025-10-09 10:12:07.960864 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:12:07.960875 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:12:07.960890 | orchestrator | ok: [testbed-node-1] 
2025-10-09 10:12:07.960902 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:12:07.960912 | orchestrator | ok: [testbed-manager] 2025-10-09 10:12:07.960924 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:12:07.960934 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:12:07.960945 | orchestrator | 2025-10-09 10:12:07.960956 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:12:07.960968 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:12:07.960980 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:07.960991 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:07.961002 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:07.961013 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:07.961024 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:07.961034 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:07.961045 | orchestrator | 2025-10-09 10:12:07.961056 | orchestrator | 2025-10-09 10:12:07.961067 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:12:07.961078 | orchestrator | Thursday 09 October 2025 10:12:07 +0000 (0:00:02.804) 0:00:21.923 ****** 2025-10-09 10:12:07.961088 | orchestrator | =============================================================================== 2025-10-09 10:12:07.961099 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.60s 2025-10-09 10:12:07.961110 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.80s 2025-10-09 10:12:07.961120 | orchestrator | Apply netplan configuration --------------------------------------------- 2.59s 2025-10-09 10:12:07.961131 | orchestrator | Apply netplan configuration --------------------------------------------- 1.99s 2025-10-09 10:12:07.961142 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.86s 2025-10-09 10:12:07.961152 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.84s 2025-10-09 10:12:07.961163 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.66s 2025-10-09 10:12:07.961174 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.54s 2025-10-09 10:12:07.961184 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s 2025-10-09 10:12:07.961195 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.85s 2025-10-09 10:12:07.961206 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.80s 2025-10-09 10:12:07.961223 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s 2025-10-09 10:12:08.768781 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-10-09 10:12:20.935490 | orchestrator | 2025-10-09 10:12:20 | INFO  | Task 22a46c1f-c2ad-4a67-bbb3-5bc46315c4f4 (reboot) was prepared for execution. 2025-10-09 10:12:20.935603 | orchestrator | 2025-10-09 10:12:20 | INFO  | It takes a moment until task 22a46c1f-c2ad-4a67-bbb3-5bc46315c4f4 (reboot) has been started and output is visible here. 
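The reboot play invoked above is gated on `-e ireallymeanit=yes`: the "Exit playbook, if user did not mean to reboot systems" task aborts the run unless that confirmation variable is passed, which is why it shows as `skipping` here. A minimal bash sketch of such a confirmation gate (the `confirm_reboot` helper is hypothetical, not part of the playbook):

```shell
# Hypothetical sketch of the confirmation gate behind `-e ireallymeanit=yes`:
# refuse to proceed unless the caller explicitly confirms.
confirm_reboot() {
  if [ "${1:-no}" != "yes" ]; then
    echo "abort: pass ireallymeanit=yes to confirm the reboot"
    return 1
  fi
  echo "proceed"
}

confirm_reboot yes
```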
2025-10-09 10:12:31.594496 | orchestrator | 2025-10-09 10:12:31.594606 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:12:31.594619 | orchestrator | 2025-10-09 10:12:31.594629 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:12:31.594637 | orchestrator | Thursday 09 October 2025 10:12:25 +0000 (0:00:00.257) 0:00:00.258 ****** 2025-10-09 10:12:31.594645 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:12:31.594654 | orchestrator | 2025-10-09 10:12:31.594662 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:12:31.594670 | orchestrator | Thursday 09 October 2025 10:12:25 +0000 (0:00:00.112) 0:00:00.370 ****** 2025-10-09 10:12:31.594678 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:12:31.594686 | orchestrator | 2025-10-09 10:12:31.594694 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:12:31.594702 | orchestrator | Thursday 09 October 2025 10:12:26 +0000 (0:00:00.942) 0:00:01.312 ****** 2025-10-09 10:12:31.594710 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:12:31.594718 | orchestrator | 2025-10-09 10:12:31.594740 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:12:31.594748 | orchestrator | 2025-10-09 10:12:31.594757 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:12:31.594764 | orchestrator | Thursday 09 October 2025 10:12:26 +0000 (0:00:00.119) 0:00:01.432 ****** 2025-10-09 10:12:31.594772 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:12:31.594780 | orchestrator | 2025-10-09 10:12:31.594788 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:12:31.594796 | orchestrator | Thursday 09 October 
2025 10:12:26 +0000 (0:00:00.127) 0:00:01.560 ****** 2025-10-09 10:12:31.594804 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:12:31.594811 | orchestrator | 2025-10-09 10:12:31.594820 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:12:31.594827 | orchestrator | Thursday 09 October 2025 10:12:27 +0000 (0:00:00.713) 0:00:02.274 ****** 2025-10-09 10:12:31.594835 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:12:31.594843 | orchestrator | 2025-10-09 10:12:31.594854 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:12:31.594862 | orchestrator | 2025-10-09 10:12:31.594870 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:12:31.594878 | orchestrator | Thursday 09 October 2025 10:12:27 +0000 (0:00:00.127) 0:00:02.401 ****** 2025-10-09 10:12:31.594885 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:12:31.594893 | orchestrator | 2025-10-09 10:12:31.594901 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:12:31.594909 | orchestrator | Thursday 09 October 2025 10:12:27 +0000 (0:00:00.236) 0:00:02.637 ****** 2025-10-09 10:12:31.594917 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:12:31.594924 | orchestrator | 2025-10-09 10:12:31.594932 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:12:31.594940 | orchestrator | Thursday 09 October 2025 10:12:28 +0000 (0:00:00.658) 0:00:03.296 ****** 2025-10-09 10:12:31.594948 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:12:31.594955 | orchestrator | 2025-10-09 10:12:31.594963 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:12:31.594971 | orchestrator | 2025-10-09 10:12:31.594979 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-10-09 10:12:31.594987 | orchestrator | Thursday 09 October 2025 10:12:28 +0000 (0:00:00.112) 0:00:03.409 ****** 2025-10-09 10:12:31.594994 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:12:31.595002 | orchestrator | 2025-10-09 10:12:31.595011 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:12:31.595020 | orchestrator | Thursday 09 October 2025 10:12:28 +0000 (0:00:00.092) 0:00:03.501 ****** 2025-10-09 10:12:31.595030 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:12:31.595039 | orchestrator | 2025-10-09 10:12:31.595048 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:12:31.595062 | orchestrator | Thursday 09 October 2025 10:12:29 +0000 (0:00:00.691) 0:00:04.193 ****** 2025-10-09 10:12:31.595072 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:12:31.595080 | orchestrator | 2025-10-09 10:12:31.595089 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:12:31.595098 | orchestrator | 2025-10-09 10:12:31.595106 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:12:31.595115 | orchestrator | Thursday 09 October 2025 10:12:29 +0000 (0:00:00.155) 0:00:04.349 ****** 2025-10-09 10:12:31.595124 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:12:31.595132 | orchestrator | 2025-10-09 10:12:31.595141 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:12:31.595150 | orchestrator | Thursday 09 October 2025 10:12:29 +0000 (0:00:00.111) 0:00:04.460 ****** 2025-10-09 10:12:31.595158 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:12:31.595167 | orchestrator | 2025-10-09 10:12:31.595176 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-10-09 10:12:31.595185 | orchestrator | Thursday 09 October 2025 10:12:30 +0000 (0:00:00.708) 0:00:05.169 ****** 2025-10-09 10:12:31.595193 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:12:31.595202 | orchestrator | 2025-10-09 10:12:31.595211 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-10-09 10:12:31.595220 | orchestrator | 2025-10-09 10:12:31.595229 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-10-09 10:12:31.595238 | orchestrator | Thursday 09 October 2025 10:12:30 +0000 (0:00:00.131) 0:00:05.301 ****** 2025-10-09 10:12:31.595247 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:12:31.595255 | orchestrator | 2025-10-09 10:12:31.595264 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-10-09 10:12:31.595273 | orchestrator | Thursday 09 October 2025 10:12:30 +0000 (0:00:00.121) 0:00:05.422 ****** 2025-10-09 10:12:31.595281 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:12:31.595290 | orchestrator | 2025-10-09 10:12:31.595299 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-10-09 10:12:31.595308 | orchestrator | Thursday 09 October 2025 10:12:31 +0000 (0:00:00.674) 0:00:06.097 ****** 2025-10-09 10:12:31.595331 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:12:31.595342 | orchestrator | 2025-10-09 10:12:31.595351 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:12:31.595382 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:31.595392 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:31.595400 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-10-09 10:12:31.595408 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:31.595415 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:31.595423 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:12:31.595431 | orchestrator | 2025-10-09 10:12:31.595439 | orchestrator | 2025-10-09 10:12:31.595447 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:12:31.595455 | orchestrator | Thursday 09 October 2025 10:12:31 +0000 (0:00:00.037) 0:00:06.135 ****** 2025-10-09 10:12:31.595462 | orchestrator | =============================================================================== 2025-10-09 10:12:31.595476 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.39s 2025-10-09 10:12:31.595487 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.80s 2025-10-09 10:12:31.595495 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.69s 2025-10-09 10:12:31.916708 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-10-09 10:12:44.023224 | orchestrator | 2025-10-09 10:12:44 | INFO  | Task 40c7c4e8-ade4-47a0-ae1c-884b7c2df81f (wait-for-connection) was prepared for execution. 2025-10-09 10:12:44.023335 | orchestrator | 2025-10-09 10:12:44 | INFO  | It takes a moment until task 40c7c4e8-ade4-47a0-ae1c-884b7c2df81f (wait-for-connection) has been started and output is visible here. 
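The `osism apply wait-for-connection` step that follows blocks until the freshly rebooted nodes accept connections again (the play uses Ansible's `wait_for_connection` module under the hood). A minimal poll-until-reachable sketch of the same idea (the `wait_for_connection` helper and its probe command are hypothetical; a real probe would be an SSH or ping check):

```shell
# Sketch: retry a reachability probe until it succeeds or a timeout expires.
# $1 = timeout seconds, $2 = poll interval seconds, rest = probe command.
wait_for_connection() {
  local timeout="$1" interval="$2"; shift 2
  local waited=0
  until "$@"; do
    waited=$((waited + interval))
    [ "$waited" -ge "$timeout" ] && return 1
    sleep "$interval"
  done
}

# Trivial probe (`true`) stands in for e.g. `ssh -o ConnectTimeout=5 host true`.
wait_for_connection 10 1 true && echo reachable
```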
2025-10-09 10:13:00.725199 | orchestrator | 2025-10-09 10:13:00.725322 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-10-09 10:13:00.725340 | orchestrator | 2025-10-09 10:13:00.725402 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-10-09 10:13:00.725417 | orchestrator | Thursday 09 October 2025 10:12:48 +0000 (0:00:00.268) 0:00:00.268 ****** 2025-10-09 10:13:00.725428 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:13:00.725440 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:13:00.725451 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:13:00.725462 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:13:00.725473 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:13:00.725483 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:13:00.725494 | orchestrator | 2025-10-09 10:13:00.725505 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:13:00.725516 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:13:00.725529 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:13:00.725540 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:13:00.725551 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:13:00.725562 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:13:00.725573 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:13:00.725583 | orchestrator | 2025-10-09 10:13:00.725594 | orchestrator | 2025-10-09 10:13:00.725605 | orchestrator | TASKS RECAP 
******************************************************************** 2025-10-09 10:13:00.725616 | orchestrator | Thursday 09 October 2025 10:13:00 +0000 (0:00:11.622) 0:00:11.891 ****** 2025-10-09 10:13:00.725626 | orchestrator | =============================================================================== 2025-10-09 10:13:00.725637 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.62s 2025-10-09 10:13:01.059472 | orchestrator | + osism apply hddtemp 2025-10-09 10:13:13.192852 | orchestrator | 2025-10-09 10:13:13 | INFO  | Task 548b9012-0991-478b-94e5-3befa35fbf09 (hddtemp) was prepared for execution. 2025-10-09 10:13:13.192968 | orchestrator | 2025-10-09 10:13:13 | INFO  | It takes a moment until task 548b9012-0991-478b-94e5-3befa35fbf09 (hddtemp) has been started and output is visible here. 2025-10-09 10:13:41.886379 | orchestrator | 2025-10-09 10:13:41.886487 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-10-09 10:13:41.886504 | orchestrator | 2025-10-09 10:13:41.886516 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-10-09 10:13:41.886528 | orchestrator | Thursday 09 October 2025 10:13:17 +0000 (0:00:00.328) 0:00:00.328 ****** 2025-10-09 10:13:41.886539 | orchestrator | ok: [testbed-manager] 2025-10-09 10:13:41.886576 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:13:41.886587 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:13:41.886598 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:13:41.886608 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:13:41.886619 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:13:41.886629 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:13:41.886640 | orchestrator | 2025-10-09 10:13:41.886650 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-10-09 10:13:41.886661 | orchestrator | Thursday 09 October 2025 
10:13:18 +0000 (0:00:00.787) 0:00:01.115 ****** 2025-10-09 10:13:41.886674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:13:41.886687 | orchestrator | 2025-10-09 10:13:41.886698 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-10-09 10:13:41.886708 | orchestrator | Thursday 09 October 2025 10:13:19 +0000 (0:00:01.302) 0:00:02.418 ****** 2025-10-09 10:13:41.886719 | orchestrator | ok: [testbed-manager] 2025-10-09 10:13:41.886729 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:13:41.886739 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:13:41.886750 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:13:41.886760 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:13:41.886770 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:13:41.886781 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:13:41.886791 | orchestrator | 2025-10-09 10:13:41.886802 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-10-09 10:13:41.886813 | orchestrator | Thursday 09 October 2025 10:13:21 +0000 (0:00:02.063) 0:00:04.481 ****** 2025-10-09 10:13:41.886823 | orchestrator | changed: [testbed-manager] 2025-10-09 10:13:41.886835 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:13:41.886861 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:13:41.886872 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:13:41.886883 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:13:41.886895 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:13:41.886907 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:13:41.886919 | orchestrator | 2025-10-09 10:13:41.886932 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-10-09 10:13:41.886944 | orchestrator | Thursday 09 October 2025 10:13:23 +0000 (0:00:01.223) 0:00:05.705 ****** 2025-10-09 10:13:41.886956 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:13:41.886969 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:13:41.886981 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:13:41.886993 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:13:41.887005 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:13:41.887017 | orchestrator | ok: [testbed-manager] 2025-10-09 10:13:41.887028 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:13:41.887040 | orchestrator | 2025-10-09 10:13:41.887052 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-10-09 10:13:41.887065 | orchestrator | Thursday 09 October 2025 10:13:24 +0000 (0:00:01.150) 0:00:06.855 ****** 2025-10-09 10:13:41.887076 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:13:41.887088 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:13:41.887100 | orchestrator | changed: [testbed-manager] 2025-10-09 10:13:41.887112 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:13:41.887124 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:13:41.887136 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:13:41.887148 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:13:41.887160 | orchestrator | 2025-10-09 10:13:41.887173 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-10-09 10:13:41.887185 | orchestrator | Thursday 09 October 2025 10:13:25 +0000 (0:00:00.905) 0:00:07.761 ****** 2025-10-09 10:13:41.887197 | orchestrator | changed: [testbed-manager] 2025-10-09 10:13:41.887208 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:13:41.887221 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:13:41.887241 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:13:41.887253 | orchestrator | changed: 
[testbed-node-4] 2025-10-09 10:13:41.887263 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:13:41.887273 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:13:41.887284 | orchestrator | 2025-10-09 10:13:41.887295 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-10-09 10:13:41.887305 | orchestrator | Thursday 09 October 2025 10:13:38 +0000 (0:00:13.182) 0:00:20.944 ****** 2025-10-09 10:13:41.887317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:13:41.887328 | orchestrator | 2025-10-09 10:13:41.887339 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-10-09 10:13:41.887368 | orchestrator | Thursday 09 October 2025 10:13:39 +0000 (0:00:01.266) 0:00:22.211 ****** 2025-10-09 10:13:41.887379 | orchestrator | changed: [testbed-manager] 2025-10-09 10:13:41.887390 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:13:41.887400 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:13:41.887410 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:13:41.887421 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:13:41.887431 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:13:41.887442 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:13:41.887452 | orchestrator | 2025-10-09 10:13:41.887463 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:13:41.887474 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:13:41.887503 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:41.887516 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:41.887527 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:41.887538 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:41.887548 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:41.887559 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:13:41.887570 | orchestrator | 2025-10-09 10:13:41.887581 | orchestrator | 2025-10-09 10:13:41.887592 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:13:41.887603 | orchestrator | Thursday 09 October 2025 10:13:41 +0000 (0:00:01.890) 0:00:24.102 ****** 2025-10-09 10:13:41.887614 | orchestrator | =============================================================================== 2025-10-09 10:13:41.887625 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.18s 2025-10-09 10:13:41.887635 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.06s 2025-10-09 10:13:41.887646 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.89s 2025-10-09 10:13:41.887657 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.30s 2025-10-09 10:13:41.887673 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.27s 2025-10-09 10:13:41.887684 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2025-10-09 10:13:41.887695 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.15s 2025-10-09 10:13:41.887712 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.91s 2025-10-09 10:13:41.887723 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.79s 2025-10-09 10:13:42.221056 | orchestrator | ++ semver latest 7.1.1 2025-10-09 10:13:42.289202 | orchestrator | + [[ -1 -ge 0 ]] 2025-10-09 10:13:42.289238 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-10-09 10:13:42.289251 | orchestrator | + sudo systemctl restart manager.service 2025-10-09 10:13:57.158475 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-10-09 10:13:57.158583 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-10-09 10:13:57.158599 | orchestrator | + local max_attempts=60 2025-10-09 10:13:57.158614 | orchestrator | + local name=ceph-ansible 2025-10-09 10:13:57.158625 | orchestrator | + local attempt_num=1 2025-10-09 10:13:57.158636 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:13:57.189455 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:13:57.189620 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:13:57.189641 | orchestrator | + sleep 5 2025-10-09 10:14:02.195813 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:02.218585 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:02.218629 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:02.218643 | orchestrator | + sleep 5 2025-10-09 10:14:07.221726 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:07.266778 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:07.266833 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:07.266845 | orchestrator | + sleep 5 2025-10-09 10:14:12.273801 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:12.438918 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:12.438976 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:12.438989 | orchestrator | + sleep 5 2025-10-09 10:14:17.310720 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:17.345033 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:17.345102 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:17.345115 | orchestrator | + sleep 5 2025-10-09 10:14:22.349566 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:22.384872 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:22.384938 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:22.384952 | orchestrator | + sleep 5 2025-10-09 10:14:27.390299 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:27.430681 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:27.430724 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:27.430738 | orchestrator | + sleep 5 2025-10-09 10:14:32.435290 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:32.539721 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:32.539810 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:32.539826 | orchestrator | + sleep 5 2025-10-09 10:14:37.544751 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:37.578543 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:37.578612 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:37.578628 | orchestrator | + sleep 5 2025-10-09 10:14:42.580954 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:42.616874 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-10-09 10:14:42.616908 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:42.616916 | orchestrator | + sleep 5 2025-10-09 10:14:47.620441 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:47.651520 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:47.651577 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:47.651590 | orchestrator | + sleep 5 2025-10-09 10:14:52.656804 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:52.693132 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:52.693217 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:52.693231 | orchestrator | + sleep 5 2025-10-09 10:14:57.700278 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:14:57.744864 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-10-09 10:14:57.744923 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-10-09 10:14:57.744937 | orchestrator | + sleep 5 2025-10-09 10:15:02.750649 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-10-09 10:15:02.791787 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:15:02.791841 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-10-09 10:15:02.791856 | orchestrator | + local max_attempts=60 2025-10-09 10:15:02.791870 | orchestrator | + local name=kolla-ansible 2025-10-09 10:15:02.791882 | orchestrator | + local attempt_num=1 2025-10-09 10:15:02.792473 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-10-09 10:15:02.830155 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:15:02.830183 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-10-09 10:15:02.830194 | orchestrator | + local max_attempts=60 2025-10-09 
10:15:02.830206 | orchestrator | + local name=osism-ansible 2025-10-09 10:15:02.830217 | orchestrator | + local attempt_num=1 2025-10-09 10:15:02.831283 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-10-09 10:15:02.874291 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-10-09 10:15:02.874844 | orchestrator | + [[ true == \t\r\u\e ]] 2025-10-09 10:15:02.874870 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-10-09 10:15:03.044065 | orchestrator | ARA in ceph-ansible already disabled. 2025-10-09 10:15:03.191490 | orchestrator | ARA in kolla-ansible already disabled. 2025-10-09 10:15:03.374760 | orchestrator | ARA in osism-ansible already disabled. 2025-10-09 10:15:03.537820 | orchestrator | ARA in osism-kubernetes already disabled. 2025-10-09 10:15:03.539553 | orchestrator | + osism apply gather-facts 2025-10-09 10:15:15.782729 | orchestrator | 2025-10-09 10:15:15 | INFO  | Task 0af92089-a263-4988-b0f5-3bd011bb63f9 (gather-facts) was prepared for execution. 2025-10-09 10:15:15.782839 | orchestrator | 2025-10-09 10:15:15 | INFO  | It takes a moment until task 0af92089-a263-4988-b0f5-3bd011bb63f9 (gather-facts) has been started and output is visible here. 
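The `wait_for_container_healthy` polling traced above (repeated `docker inspect` of `.State.Health.Status` with 5-second sleeps until `healthy`) corresponds to a bash helper along these lines. The function name and argument order are taken from the trace; the body is a minimal sketch, not the testbed's actual implementation (the trace calls `/usr/bin/docker` by absolute path, shortened to `docker` here):

```shell
#!/usr/bin/env bash
# Minimal sketch of the health-wait loop seen in the trace above.
# Polls `docker inspect` until the container reports "healthy", or gives
# up after max_attempts polls (5 seconds apart, as in the log).
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Fail once the attempt budget is used up; otherwise back off and retry.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage as in the trace:
#   wait_for_container_healthy 60 ceph-ansible
```

In the log this gate runs three times in sequence (ceph-ansible, kolla-ansible, osism-ansible) before the deploy scripts continue, so a container stuck in `starting` or `unhealthy` blocks the job instead of letting later steps fail obscurely.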
2025-10-09 10:15:29.958094 | orchestrator | 2025-10-09 10:15:29.958221 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 10:15:29.958252 | orchestrator | 2025-10-09 10:15:29.958965 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 10:15:29.958987 | orchestrator | Thursday 09 October 2025 10:15:20 +0000 (0:00:00.228) 0:00:00.228 ****** 2025-10-09 10:15:29.958999 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:15:29.959011 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:15:29.959042 | orchestrator | ok: [testbed-manager] 2025-10-09 10:15:29.959054 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:15:29.959064 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:15:29.959075 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:15:29.959086 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:15:29.959097 | orchestrator | 2025-10-09 10:15:29.959108 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-10-09 10:15:29.959119 | orchestrator | 2025-10-09 10:15:29.959130 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-10-09 10:15:29.959141 | orchestrator | Thursday 09 October 2025 10:15:28 +0000 (0:00:08.643) 0:00:08.872 ****** 2025-10-09 10:15:29.959153 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:15:29.959166 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:15:29.959177 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:15:29.959188 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:15:29.959199 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:15:29.959210 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:15:29.959220 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:15:29.959231 | orchestrator | 2025-10-09 10:15:29.959242 | orchestrator | PLAY RECAP 
********************************************************************* 2025-10-09 10:15:29.959254 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:15:29.959266 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:15:29.959301 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:15:29.959313 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:15:29.959323 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:15:29.959358 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:15:29.959369 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:15:29.959380 | orchestrator | 2025-10-09 10:15:29.959391 | orchestrator | 2025-10-09 10:15:29.959402 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:15:29.959413 | orchestrator | Thursday 09 October 2025 10:15:29 +0000 (0:00:00.596) 0:00:09.468 ****** 2025-10-09 10:15:29.959424 | orchestrator | =============================================================================== 2025-10-09 10:15:29.959435 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.64s 2025-10-09 10:15:29.959446 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2025-10-09 10:15:30.328672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-10-09 10:15:30.344654 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-10-09 10:15:30.363656 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-10-09 10:15:30.378194 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-10-09 10:15:30.392717 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-10-09 10:15:30.412150 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-10-09 10:15:30.431191 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-10-09 10:15:30.452198 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-10-09 10:15:30.472932 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-10-09 10:15:30.491482 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-10-09 10:15:30.505955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-10-09 10:15:30.524780 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-10-09 10:15:30.550305 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-10-09 10:15:30.569124 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-10-09 10:15:30.594187 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-10-09 10:15:30.614791 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-10-09 10:15:30.636389 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-10-09 10:15:30.657977 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-10-09 10:15:30.680948 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-10-09 10:15:30.702009 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-10-09 10:15:30.722154 | orchestrator | + [[ false == \t\r\u\e ]] 2025-10-09 10:15:31.079585 | orchestrator | ok: Runtime: 0:24:25.630068 2025-10-09 10:15:31.171924 | 2025-10-09 10:15:31.172044 | TASK [Deploy services] 2025-10-09 10:15:31.704789 | orchestrator | skipping: Conditional result was False 2025-10-09 10:15:31.723302 | 2025-10-09 10:15:31.723483 | TASK [Deploy in a nutshell] 2025-10-09 10:15:32.429805 | orchestrator | + set -e 2025-10-09 10:15:32.429934 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 10:15:32.429945 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 10:15:32.429953 | orchestrator | ++ INTERACTIVE=false 2025-10-09 10:15:32.429959 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 10:15:32.429963 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 10:15:32.429970 | orchestrator | + source /opt/manager-vars.sh 2025-10-09 10:15:32.429990 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-10-09 10:15:32.430002 | orchestrator | ++ NUMBER_OF_NODES=6 2025-10-09 10:15:32.430007 | orchestrator | ++ export CEPH_VERSION=reef 2025-10-09 10:15:32.430028 | orchestrator | ++ CEPH_VERSION=reef 2025-10-09 10:15:32.430033 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-10-09 10:15:32.430041 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2025-10-09 10:15:32.430045 | orchestrator | ++ export MANAGER_VERSION=latest 2025-10-09 10:15:32.430057 | orchestrator | 2025-10-09 10:15:32.430061 | orchestrator | # PULL IMAGES 2025-10-09 10:15:32.430065 | orchestrator | 2025-10-09 10:15:32.430072 | orchestrator | ++ MANAGER_VERSION=latest 2025-10-09 10:15:32.430076 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-10-09 10:15:32.430081 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-10-09 10:15:32.430084 | orchestrator | ++ export ARA=false 2025-10-09 10:15:32.430088 | orchestrator | ++ ARA=false 2025-10-09 10:15:32.430092 | orchestrator | ++ export DEPLOY_MODE=manager 2025-10-09 10:15:32.430096 | orchestrator | ++ DEPLOY_MODE=manager 2025-10-09 10:15:32.430100 | orchestrator | ++ export TEMPEST=false 2025-10-09 10:15:32.430103 | orchestrator | ++ TEMPEST=false 2025-10-09 10:15:32.430107 | orchestrator | ++ export IS_ZUUL=true 2025-10-09 10:15:32.430111 | orchestrator | ++ IS_ZUUL=true 2025-10-09 10:15:32.430115 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25 2025-10-09 10:15:32.430119 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25 2025-10-09 10:15:32.430123 | orchestrator | ++ export EXTERNAL_API=false 2025-10-09 10:15:32.430126 | orchestrator | ++ EXTERNAL_API=false 2025-10-09 10:15:32.430130 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-10-09 10:15:32.430134 | orchestrator | ++ IMAGE_USER=ubuntu 2025-10-09 10:15:32.430138 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-10-09 10:15:32.430142 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-10-09 10:15:32.430145 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-10-09 10:15:32.430152 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-10-09 10:15:32.430157 | orchestrator | + echo 2025-10-09 10:15:32.430160 | orchestrator | + echo '# PULL IMAGES' 2025-10-09 10:15:32.430164 | orchestrator | + echo 2025-10-09 10:15:32.431014 | orchestrator | ++ semver latest 7.0.0 2025-10-09 
10:15:32.503607 | orchestrator | + [[ -1 -ge 0 ]] 2025-10-09 10:15:32.503646 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-10-09 10:15:32.503653 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-10-09 10:15:34.530972 | orchestrator | 2025-10-09 10:15:34 | INFO  | Trying to run play pull-images in environment custom 2025-10-09 10:15:44.624026 | orchestrator | 2025-10-09 10:15:44 | INFO  | Task 2eaae7b8-498b-4324-9a51-8d6fe56abf8f (pull-images) was prepared for execution. 2025-10-09 10:15:44.624170 | orchestrator | 2025-10-09 10:15:44 | INFO  | Task 2eaae7b8-498b-4324-9a51-8d6fe56abf8f is running in background. No more output. Check ARA for logs. 2025-10-09 10:15:47.218121 | orchestrator | 2025-10-09 10:15:47 | INFO  | Trying to run play wipe-partitions in environment custom 2025-10-09 10:15:57.389098 | orchestrator | 2025-10-09 10:15:57 | INFO  | Task 63e9afa9-02a4-4a4b-915f-fa09796c788c (wipe-partitions) was prepared for execution. 2025-10-09 10:15:57.389223 | orchestrator | 2025-10-09 10:15:57 | INFO  | It takes a moment until task 63e9afa9-02a4-4a4b-915f-fa09796c788c (wipe-partitions) has been started and output is visible here. 
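The `++ semver latest 7.0.0` / `[[ -1 -ge 0 ]]` / `[[ latest == latest ]]` trace above is a version gate: run the step when the manager version is at least the required release, or when it is the literal tag `latest` (which the comparator cannot parse, hence the `-1`). The following is a sketch with a toy stand-in for the `semver` helper; the real helper on the testbed is a separate tool whose internals are not shown in the log:

```shell
# Toy stand-in for the `semver` helper seen in the trace: prints -1/0/1
# when the first version sorts before/equal/after the second, and -1 for
# non-numeric tags such as "latest" (matching the -1 in the log).
semver() {
    if [[ "$1" == "$2" ]]; then echo 0
    elif [[ ! "$1" =~ ^[0-9] ]]; then echo -1
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then echo -1
    else echo 1
    fi
}

MANAGER_VERSION=latest
# Proceed when MANAGER_VERSION >= 7.0.0 OR it is the moving "latest" tag.
if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
    echo "run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The explicit `latest` escape hatch is why the job proceeds even though the numeric comparison returns `-1`.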
2025-10-09 10:16:10.326817 | orchestrator | 2025-10-09 10:16:10.326932 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-10-09 10:16:10.326947 | orchestrator | 2025-10-09 10:16:10.326958 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-10-09 10:16:10.326972 | orchestrator | Thursday 09 October 2025 10:16:01 +0000 (0:00:00.163) 0:00:00.163 ****** 2025-10-09 10:16:10.326984 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:16:10.326995 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:16:10.327005 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:16:10.327016 | orchestrator | 2025-10-09 10:16:10.327026 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-10-09 10:16:10.327064 | orchestrator | Thursday 09 October 2025 10:16:02 +0000 (0:00:00.612) 0:00:00.776 ****** 2025-10-09 10:16:10.327075 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:10.327085 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:16:10.327100 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:16:10.327110 | orchestrator | 2025-10-09 10:16:10.327120 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-10-09 10:16:10.327129 | orchestrator | Thursday 09 October 2025 10:16:02 +0000 (0:00:00.389) 0:00:01.166 ****** 2025-10-09 10:16:10.327139 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:16:10.327150 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:10.327160 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:16:10.327169 | orchestrator | 2025-10-09 10:16:10.327179 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-10-09 10:16:10.327189 | orchestrator | Thursday 09 October 2025 10:16:03 +0000 (0:00:00.627) 0:00:01.793 ****** 2025-10-09 10:16:10.327199 | orchestrator | skipping: 
[testbed-node-3] 2025-10-09 10:16:10.327209 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:16:10.327218 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:16:10.327228 | orchestrator | 2025-10-09 10:16:10.327238 | orchestrator | TASK [Check device availability] *********************************************** 2025-10-09 10:16:10.327247 | orchestrator | Thursday 09 October 2025 10:16:03 +0000 (0:00:00.251) 0:00:02.045 ****** 2025-10-09 10:16:10.327257 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-10-09 10:16:10.327271 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-10-09 10:16:10.327281 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-10-09 10:16:10.327290 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-10-09 10:16:10.327300 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-10-09 10:16:10.327310 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-10-09 10:16:10.327361 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-10-09 10:16:10.327374 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-10-09 10:16:10.327385 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-10-09 10:16:10.327396 | orchestrator | 2025-10-09 10:16:10.327407 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-10-09 10:16:10.327418 | orchestrator | Thursday 09 October 2025 10:16:04 +0000 (0:00:01.152) 0:00:03.197 ****** 2025-10-09 10:16:10.327429 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-10-09 10:16:10.327440 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-10-09 10:16:10.327451 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-10-09 10:16:10.327462 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-10-09 10:16:10.327472 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-10-09 10:16:10.327483 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-10-09 10:16:10.327494 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-10-09 10:16:10.327504 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-10-09 10:16:10.327515 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-10-09 10:16:10.327526 | orchestrator | 2025-10-09 10:16:10.327537 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-10-09 10:16:10.327548 | orchestrator | Thursday 09 October 2025 10:16:06 +0000 (0:00:01.583) 0:00:04.780 ****** 2025-10-09 10:16:10.327558 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-10-09 10:16:10.327569 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-10-09 10:16:10.327580 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-10-09 10:16:10.327591 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-10-09 10:16:10.327602 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-10-09 10:16:10.327620 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-10-09 10:16:10.327632 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-10-09 10:16:10.327652 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-10-09 10:16:10.327663 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-10-09 10:16:10.327674 | orchestrator | 2025-10-09 10:16:10.327686 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-10-09 10:16:10.327697 | orchestrator | Thursday 09 October 2025 10:16:08 +0000 (0:00:02.070) 0:00:06.851 ****** 2025-10-09 10:16:10.327707 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:16:10.327717 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:16:10.327726 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:16:10.327736 | orchestrator | 2025-10-09 10:16:10.327746 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-10-09 10:16:10.327755 | orchestrator | Thursday 09 October 2025 10:16:09 +0000 (0:00:00.626) 0:00:07.478 ****** 2025-10-09 10:16:10.327765 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:16:10.327775 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:16:10.327785 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:16:10.327794 | orchestrator | 2025-10-09 10:16:10.327804 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:16:10.327816 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:10.327827 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:10.327853 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:10.327864 | orchestrator | 2025-10-09 10:16:10.327874 | orchestrator | 2025-10-09 10:16:10.327884 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:16:10.327893 | orchestrator | Thursday 09 October 2025 10:16:09 +0000 (0:00:00.675) 0:00:08.153 ****** 2025-10-09 10:16:10.327903 | orchestrator | =============================================================================== 2025-10-09 10:16:10.327913 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.07s 2025-10-09 10:16:10.327923 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.58s 2025-10-09 10:16:10.327932 | orchestrator | Check device availability ----------------------------------------------- 1.15s 2025-10-09 10:16:10.327942 | orchestrator | Request device events from the kernel ----------------------------------- 0.68s 2025-10-09 10:16:10.327952 | orchestrator | Find all logical devices with prefix ceph 
------------------------------- 0.63s 2025-10-09 10:16:10.327961 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2025-10-09 10:16:10.327971 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.61s 2025-10-09 10:16:10.327981 | orchestrator | Remove all rook related logical devices --------------------------------- 0.39s 2025-10-09 10:16:10.327991 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-10-09 10:16:22.725889 | orchestrator | 2025-10-09 10:16:22 | INFO  | Task 6f713b2a-c429-4876-ac1f-ac64ef381008 (facts) was prepared for execution. 2025-10-09 10:16:22.726000 | orchestrator | 2025-10-09 10:16:22 | INFO  | It takes a moment until task 6f713b2a-c429-4876-ac1f-ac64ef381008 (facts) has been started and output is visible here. 2025-10-09 10:16:35.927530 | orchestrator | 2025-10-09 10:16:35.927633 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-10-09 10:16:35.927645 | orchestrator | 2025-10-09 10:16:35.927654 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-10-09 10:16:35.927663 | orchestrator | Thursday 09 October 2025 10:16:27 +0000 (0:00:00.282) 0:00:00.282 ****** 2025-10-09 10:16:35.927672 | orchestrator | ok: [testbed-manager] 2025-10-09 10:16:35.927681 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:16:35.927689 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:16:35.927720 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:16:35.927728 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:35.927736 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:16:35.927744 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:16:35.927752 | orchestrator | 2025-10-09 10:16:35.927761 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-10-09 10:16:35.927769 | 
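Per device, the wipe-partitions play above reduces to two destructive steps (signature wipe, then zeroing the first 32M) followed by a udev refresh. A sketch of those steps, assuming the hypothetical helper name `wipe_device`; the device list `/dev/sdb`–`/dev/sdd` comes from the log and this must only ever be pointed at disks you intend to erase:

```shell
#!/usr/bin/env bash
# DANGER when pointed at real disks: this erases them.
# Sketch of the per-device steps from the wipe-partitions play above.
wipe_device() {
    local dev="$1"
    wipefs --all "$dev"                                    # drop filesystem/RAID/GPT signatures
    dd if=/dev/zero of="$dev" bs=1M count=32 status=none   # "Overwrite first 32M with zeros"
}

# On the testbed the play runs this for /dev/sdb, /dev/sdc, /dev/sdd, then:
#   udevadm control --reload-rules   # "Reload udev rules"
#   udevadm trigger                  # "Request device events from the kernel"
```

Zeroing the first 32M clears the GPT header and any LVM/Ceph labels so that the subsequent `ceph-configure-lvm-volumes` run sees the disks as blank.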
orchestrator | Thursday 09 October 2025 10:16:28 +0000 (0:00:01.101) 0:00:01.384 ****** 2025-10-09 10:16:35.927778 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:16:35.927786 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:16:35.927794 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:16:35.927802 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:16:35.927810 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:35.927818 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:16:35.927826 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:16:35.927834 | orchestrator | 2025-10-09 10:16:35.927842 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 10:16:35.927850 | orchestrator | 2025-10-09 10:16:35.927858 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-10-09 10:16:35.927866 | orchestrator | Thursday 09 October 2025 10:16:29 +0000 (0:00:01.340) 0:00:02.724 ****** 2025-10-09 10:16:35.927874 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:16:35.927882 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:16:35.927891 | orchestrator | ok: [testbed-manager] 2025-10-09 10:16:35.927899 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:16:35.927907 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:35.927915 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:16:35.927923 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:16:35.927931 | orchestrator | 2025-10-09 10:16:35.927939 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-10-09 10:16:35.927947 | orchestrator | 2025-10-09 10:16:35.927955 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-10-09 10:16:35.927976 | orchestrator | Thursday 09 October 2025 10:16:34 +0000 (0:00:04.944) 0:00:07.668 ****** 2025-10-09 10:16:35.927985 | orchestrator | 
skipping: [testbed-manager] 2025-10-09 10:16:35.927993 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:16:35.928000 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:16:35.928008 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:16:35.928016 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:35.928024 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:16:35.928032 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:16:35.928040 | orchestrator | 2025-10-09 10:16:35.928048 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:16:35.928056 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:35.928066 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:35.928073 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:35.928082 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:35.928091 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:35.928100 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:35.928109 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:16:35.928118 | orchestrator | 2025-10-09 10:16:35.928135 | orchestrator | 2025-10-09 10:16:35.928144 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:16:35.928153 | orchestrator | Thursday 09 October 2025 10:16:35 +0000 (0:00:00.672) 0:00:08.341 ****** 2025-10-09 10:16:35.928162 | orchestrator | =============================================================================== 
2025-10-09 10:16:35.928171 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s 2025-10-09 10:16:35.928180 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s 2025-10-09 10:16:35.928188 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s 2025-10-09 10:16:35.928197 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.67s 2025-10-09 10:16:38.472834 | orchestrator | 2025-10-09 10:16:38 | INFO  | Task 87fef764-65fe-4c2d-956e-393cf247e912 (ceph-configure-lvm-volumes) was prepared for execution. 2025-10-09 10:16:38.472930 | orchestrator | 2025-10-09 10:16:38 | INFO  | It takes a moment until task 87fef764-65fe-4c2d-956e-393cf247e912 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-10-09 10:16:50.952295 | orchestrator | 2025-10-09 10:16:50.952425 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-10-09 10:16:50.952440 | orchestrator | 2025-10-09 10:16:50.952450 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:16:50.952462 | orchestrator | Thursday 09 October 2025 10:16:43 +0000 (0:00:00.334) 0:00:00.334 ****** 2025-10-09 10:16:50.952472 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:16:50.952481 | orchestrator | 2025-10-09 10:16:50.952490 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:16:50.952499 | orchestrator | Thursday 09 October 2025 10:16:43 +0000 (0:00:00.258) 0:00:00.592 ****** 2025-10-09 10:16:50.952508 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:50.952518 | orchestrator | 2025-10-09 10:16:50.952527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.952536 | orchestrator | 
Thursday 09 October 2025 10:16:43 +0000 (0:00:00.244) 0:00:00.836 ****** 2025-10-09 10:16:50.952545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-10-09 10:16:50.952554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-10-09 10:16:50.952563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-10-09 10:16:50.952572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-10-09 10:16:50.952581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-10-09 10:16:50.952590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-10-09 10:16:50.952599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-10-09 10:16:50.952607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-10-09 10:16:50.952616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-10-09 10:16:50.952625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-10-09 10:16:50.952634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-10-09 10:16:50.952651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-10-09 10:16:50.952660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-10-09 10:16:50.952669 | orchestrator | 2025-10-09 10:16:50.952678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.952686 | orchestrator | Thursday 09 October 2025 10:16:44 +0000 (0:00:00.492) 0:00:01.329 ****** 2025-10-09 
10:16:50.952695 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.952726 | orchestrator | 2025-10-09 10:16:50.952735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.952744 | orchestrator | Thursday 09 October 2025 10:16:44 +0000 (0:00:00.205) 0:00:01.535 ****** 2025-10-09 10:16:50.952753 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.952762 | orchestrator | 2025-10-09 10:16:50.952770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.952791 | orchestrator | Thursday 09 October 2025 10:16:44 +0000 (0:00:00.207) 0:00:01.742 ****** 2025-10-09 10:16:50.952800 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.952809 | orchestrator | 2025-10-09 10:16:50.952819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.952829 | orchestrator | Thursday 09 October 2025 10:16:44 +0000 (0:00:00.229) 0:00:01.972 ****** 2025-10-09 10:16:50.952839 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.952852 | orchestrator | 2025-10-09 10:16:50.952862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.952872 | orchestrator | Thursday 09 October 2025 10:16:44 +0000 (0:00:00.212) 0:00:02.184 ****** 2025-10-09 10:16:50.952881 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.952891 | orchestrator | 2025-10-09 10:16:50.952902 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.952912 | orchestrator | Thursday 09 October 2025 10:16:45 +0000 (0:00:00.210) 0:00:02.395 ****** 2025-10-09 10:16:50.952921 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.952931 | orchestrator | 2025-10-09 10:16:50.952941 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-10-09 10:16:50.952951 | orchestrator | Thursday 09 October 2025 10:16:45 +0000 (0:00:00.228) 0:00:02.623 ****** 2025-10-09 10:16:50.952961 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.952970 | orchestrator | 2025-10-09 10:16:50.952980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.952990 | orchestrator | Thursday 09 October 2025 10:16:45 +0000 (0:00:00.215) 0:00:02.838 ****** 2025-10-09 10:16:50.953000 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.953010 | orchestrator | 2025-10-09 10:16:50.953020 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.953030 | orchestrator | Thursday 09 October 2025 10:16:45 +0000 (0:00:00.220) 0:00:03.059 ****** 2025-10-09 10:16:50.953040 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843) 2025-10-09 10:16:50.953051 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843) 2025-10-09 10:16:50.953061 | orchestrator | 2025-10-09 10:16:50.953071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.953081 | orchestrator | Thursday 09 October 2025 10:16:46 +0000 (0:00:00.427) 0:00:03.486 ****** 2025-10-09 10:16:50.953106 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d) 2025-10-09 10:16:50.953117 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d) 2025-10-09 10:16:50.953126 | orchestrator | 2025-10-09 10:16:50.953136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.953146 | orchestrator | Thursday 09 October 2025 10:16:46 +0000 (0:00:00.653) 0:00:04.140 ****** 2025-10-09 10:16:50.953156 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84) 2025-10-09 10:16:50.953166 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84) 2025-10-09 10:16:50.953176 | orchestrator | 2025-10-09 10:16:50.953185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.953195 | orchestrator | Thursday 09 October 2025 10:16:47 +0000 (0:00:00.679) 0:00:04.819 ****** 2025-10-09 10:16:50.953205 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e) 2025-10-09 10:16:50.953220 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e) 2025-10-09 10:16:50.953229 | orchestrator | 2025-10-09 10:16:50.953237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:50.953246 | orchestrator | Thursday 09 October 2025 10:16:48 +0000 (0:00:00.949) 0:00:05.769 ****** 2025-10-09 10:16:50.953255 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-09 10:16:50.953263 | orchestrator | 2025-10-09 10:16:50.953272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:50.953285 | orchestrator | Thursday 09 October 2025 10:16:48 +0000 (0:00:00.361) 0:00:06.131 ****** 2025-10-09 10:16:50.953294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-10-09 10:16:50.953302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-10-09 10:16:50.953311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-10-09 10:16:50.953344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-10-09 10:16:50.953353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-10-09 10:16:50.953361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-10-09 10:16:50.953370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-10-09 10:16:50.953379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-10-09 10:16:50.953387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-10-09 10:16:50.953395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-10-09 10:16:50.953404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-10-09 10:16:50.953412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-10-09 10:16:50.953421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-10-09 10:16:50.953430 | orchestrator | 2025-10-09 10:16:50.953438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:50.953447 | orchestrator | Thursday 09 October 2025 10:16:49 +0000 (0:00:00.410) 0:00:06.541 ****** 2025-10-09 10:16:50.953455 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.953464 | orchestrator | 2025-10-09 10:16:50.953472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:50.953481 | orchestrator | Thursday 09 October 2025 10:16:49 +0000 (0:00:00.196) 0:00:06.738 ****** 2025-10-09 10:16:50.953490 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.953498 | orchestrator | 2025-10-09 10:16:50.953507 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-10-09 10:16:50.953516 | orchestrator | Thursday 09 October 2025 10:16:49 +0000 (0:00:00.202) 0:00:06.940 ****** 2025-10-09 10:16:50.953524 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.953533 | orchestrator | 2025-10-09 10:16:50.953541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:50.953550 | orchestrator | Thursday 09 October 2025 10:16:49 +0000 (0:00:00.217) 0:00:07.158 ****** 2025-10-09 10:16:50.953559 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.953567 | orchestrator | 2025-10-09 10:16:50.953576 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:50.953585 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.192) 0:00:07.351 ****** 2025-10-09 10:16:50.953593 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.953602 | orchestrator | 2025-10-09 10:16:50.953617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:50.953626 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.211) 0:00:07.563 ****** 2025-10-09 10:16:50.953634 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.953643 | orchestrator | 2025-10-09 10:16:50.953652 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:50.953660 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.225) 0:00:07.788 ****** 2025-10-09 10:16:50.953669 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:50.953678 | orchestrator | 2025-10-09 10:16:50.953686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:50.953695 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.201) 0:00:07.990 ****** 2025-10-09 10:16:50.953710 | 
orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.753870 | orchestrator | 2025-10-09 10:16:58.753953 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:58.753968 | orchestrator | Thursday 09 October 2025 10:16:50 +0000 (0:00:00.202) 0:00:08.192 ****** 2025-10-09 10:16:58.753980 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-10-09 10:16:58.753993 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-10-09 10:16:58.754004 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-10-09 10:16:58.754067 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-10-09 10:16:58.754080 | orchestrator | 2025-10-09 10:16:58.754091 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:58.754102 | orchestrator | Thursday 09 October 2025 10:16:52 +0000 (0:00:01.102) 0:00:09.295 ****** 2025-10-09 10:16:58.754113 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.754123 | orchestrator | 2025-10-09 10:16:58.754134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:58.754145 | orchestrator | Thursday 09 October 2025 10:16:52 +0000 (0:00:00.226) 0:00:09.521 ****** 2025-10-09 10:16:58.754156 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.754166 | orchestrator | 2025-10-09 10:16:58.754177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:58.754188 | orchestrator | Thursday 09 October 2025 10:16:52 +0000 (0:00:00.220) 0:00:09.742 ****** 2025-10-09 10:16:58.754198 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.754209 | orchestrator | 2025-10-09 10:16:58.754220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:16:58.754230 | orchestrator | Thursday 09 October 2025 10:16:52 +0000 (0:00:00.220) 
0:00:09.963 ****** 2025-10-09 10:16:58.754241 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.754251 | orchestrator | 2025-10-09 10:16:58.754262 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-10-09 10:16:58.754273 | orchestrator | Thursday 09 October 2025 10:16:52 +0000 (0:00:00.212) 0:00:10.176 ****** 2025-10-09 10:16:58.754284 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-10-09 10:16:58.754294 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-10-09 10:16:58.754305 | orchestrator | 2025-10-09 10:16:58.754376 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-10-09 10:16:58.754388 | orchestrator | Thursday 09 October 2025 10:16:53 +0000 (0:00:00.183) 0:00:10.359 ****** 2025-10-09 10:16:58.754417 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.754430 | orchestrator | 2025-10-09 10:16:58.754442 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-10-09 10:16:58.754454 | orchestrator | Thursday 09 October 2025 10:16:53 +0000 (0:00:00.137) 0:00:10.497 ****** 2025-10-09 10:16:58.754467 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.754479 | orchestrator | 2025-10-09 10:16:58.754491 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-10-09 10:16:58.754504 | orchestrator | Thursday 09 October 2025 10:16:53 +0000 (0:00:00.137) 0:00:10.635 ****** 2025-10-09 10:16:58.754516 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.754552 | orchestrator | 2025-10-09 10:16:58.754565 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-10-09 10:16:58.754578 | orchestrator | Thursday 09 October 2025 10:16:53 +0000 (0:00:00.157) 0:00:10.792 ****** 2025-10-09 10:16:58.754589 | orchestrator | ok: 
[testbed-node-3] 2025-10-09 10:16:58.754600 | orchestrator | 2025-10-09 10:16:58.754611 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-10-09 10:16:58.754622 | orchestrator | Thursday 09 October 2025 10:16:53 +0000 (0:00:00.133) 0:00:10.925 ****** 2025-10-09 10:16:58.754633 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cbdaba5-e3a8-55ff-9207-33249002ea74'}}) 2025-10-09 10:16:58.754645 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8397ec-b473-5fab-a988-270c3fd4ebb0'}}) 2025-10-09 10:16:58.754656 | orchestrator | 2025-10-09 10:16:58.754667 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-10-09 10:16:58.754678 | orchestrator | Thursday 09 October 2025 10:16:53 +0000 (0:00:00.158) 0:00:11.084 ****** 2025-10-09 10:16:58.754689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cbdaba5-e3a8-55ff-9207-33249002ea74'}})  2025-10-09 10:16:58.754709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8397ec-b473-5fab-a988-270c3fd4ebb0'}})  2025-10-09 10:16:58.754720 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.754731 | orchestrator | 2025-10-09 10:16:58.754741 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-10-09 10:16:58.754752 | orchestrator | Thursday 09 October 2025 10:16:53 +0000 (0:00:00.162) 0:00:11.247 ****** 2025-10-09 10:16:58.754763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cbdaba5-e3a8-55ff-9207-33249002ea74'}})  2025-10-09 10:16:58.754774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8397ec-b473-5fab-a988-270c3fd4ebb0'}})  2025-10-09 10:16:58.754785 | orchestrator | skipping: [testbed-node-3] 2025-10-09 
10:16:58.754796 | orchestrator | 2025-10-09 10:16:58.754807 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-10-09 10:16:58.754818 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.374) 0:00:11.622 ****** 2025-10-09 10:16:58.754829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cbdaba5-e3a8-55ff-9207-33249002ea74'}})  2025-10-09 10:16:58.754840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8397ec-b473-5fab-a988-270c3fd4ebb0'}})  2025-10-09 10:16:58.754851 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.754862 | orchestrator | 2025-10-09 10:16:58.754888 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-10-09 10:16:58.754899 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.168) 0:00:11.790 ****** 2025-10-09 10:16:58.754910 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:58.754921 | orchestrator | 2025-10-09 10:16:58.754938 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-10-09 10:16:58.754949 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.171) 0:00:11.961 ****** 2025-10-09 10:16:58.754960 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:16:58.754971 | orchestrator | 2025-10-09 10:16:58.754981 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-10-09 10:16:58.754992 | orchestrator | Thursday 09 October 2025 10:16:54 +0000 (0:00:00.149) 0:00:12.110 ****** 2025-10-09 10:16:58.755003 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.755014 | orchestrator | 2025-10-09 10:16:58.755025 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-10-09 10:16:58.755035 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.149) 
0:00:12.260 ****** 2025-10-09 10:16:58.755046 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.755057 | orchestrator | 2025-10-09 10:16:58.755075 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-10-09 10:16:58.755086 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.135) 0:00:12.395 ****** 2025-10-09 10:16:58.755096 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.755107 | orchestrator | 2025-10-09 10:16:58.755118 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-10-09 10:16:58.755129 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.133) 0:00:12.529 ****** 2025-10-09 10:16:58.755140 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:16:58.755150 | orchestrator |  "ceph_osd_devices": { 2025-10-09 10:16:58.755161 | orchestrator |  "sdb": { 2025-10-09 10:16:58.755172 | orchestrator |  "osd_lvm_uuid": "0cbdaba5-e3a8-55ff-9207-33249002ea74" 2025-10-09 10:16:58.755184 | orchestrator |  }, 2025-10-09 10:16:58.755195 | orchestrator |  "sdc": { 2025-10-09 10:16:58.755205 | orchestrator |  "osd_lvm_uuid": "0b8397ec-b473-5fab-a988-270c3fd4ebb0" 2025-10-09 10:16:58.755216 | orchestrator |  } 2025-10-09 10:16:58.755227 | orchestrator |  } 2025-10-09 10:16:58.755238 | orchestrator | } 2025-10-09 10:16:58.755249 | orchestrator | 2025-10-09 10:16:58.755259 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-10-09 10:16:58.755271 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.165) 0:00:12.695 ****** 2025-10-09 10:16:58.755281 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.755292 | orchestrator | 2025-10-09 10:16:58.755303 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-10-09 10:16:58.755336 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.141) 
0:00:12.836 ****** 2025-10-09 10:16:58.755347 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.755358 | orchestrator | 2025-10-09 10:16:58.755369 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-10-09 10:16:58.755380 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.138) 0:00:12.974 ****** 2025-10-09 10:16:58.755391 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:16:58.755401 | orchestrator | 2025-10-09 10:16:58.755412 | orchestrator | TASK [Print configuration data] ************************************************ 2025-10-09 10:16:58.755423 | orchestrator | Thursday 09 October 2025 10:16:55 +0000 (0:00:00.129) 0:00:13.104 ****** 2025-10-09 10:16:58.755434 | orchestrator | changed: [testbed-node-3] => { 2025-10-09 10:16:58.755445 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-10-09 10:16:58.755456 | orchestrator |  "ceph_osd_devices": { 2025-10-09 10:16:58.755467 | orchestrator |  "sdb": { 2025-10-09 10:16:58.755477 | orchestrator |  "osd_lvm_uuid": "0cbdaba5-e3a8-55ff-9207-33249002ea74" 2025-10-09 10:16:58.755489 | orchestrator |  }, 2025-10-09 10:16:58.755500 | orchestrator |  "sdc": { 2025-10-09 10:16:58.755510 | orchestrator |  "osd_lvm_uuid": "0b8397ec-b473-5fab-a988-270c3fd4ebb0" 2025-10-09 10:16:58.755521 | orchestrator |  } 2025-10-09 10:16:58.755532 | orchestrator |  }, 2025-10-09 10:16:58.755543 | orchestrator |  "lvm_volumes": [ 2025-10-09 10:16:58.755554 | orchestrator |  { 2025-10-09 10:16:58.755565 | orchestrator |  "data": "osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74", 2025-10-09 10:16:58.755575 | orchestrator |  "data_vg": "ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74" 2025-10-09 10:16:58.755586 | orchestrator |  }, 2025-10-09 10:16:58.755597 | orchestrator |  { 2025-10-09 10:16:58.755608 | orchestrator |  "data": "osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0", 2025-10-09 10:16:58.755618 | orchestrator |  "data_vg": 
"ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0" 2025-10-09 10:16:58.755629 | orchestrator |  } 2025-10-09 10:16:58.755640 | orchestrator |  ] 2025-10-09 10:16:58.755651 | orchestrator |  } 2025-10-09 10:16:58.755662 | orchestrator | } 2025-10-09 10:16:58.755672 | orchestrator | 2025-10-09 10:16:58.755688 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-10-09 10:16:58.755707 | orchestrator | Thursday 09 October 2025 10:16:56 +0000 (0:00:00.451) 0:00:13.555 ****** 2025-10-09 10:16:58.755718 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:16:58.755729 | orchestrator | 2025-10-09 10:16:58.755740 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-10-09 10:16:58.755751 | orchestrator | 2025-10-09 10:16:58.755762 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:16:58.755773 | orchestrator | Thursday 09 October 2025 10:16:58 +0000 (0:00:01.853) 0:00:15.409 ****** 2025-10-09 10:16:58.755784 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-10-09 10:16:58.755795 | orchestrator | 2025-10-09 10:16:58.755805 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:16:58.755816 | orchestrator | Thursday 09 October 2025 10:16:58 +0000 (0:00:00.293) 0:00:15.703 ****** 2025-10-09 10:16:58.755827 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:16:58.755838 | orchestrator | 2025-10-09 10:16:58.755849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:16:58.755867 | orchestrator | Thursday 09 October 2025 10:16:58 +0000 (0:00:00.288) 0:00:15.991 ****** 2025-10-09 10:17:08.042651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-10-09 10:17:08.042755 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-10-09 10:17:08.042770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-10-09 10:17:08.042782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-10-09 10:17:08.042793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-10-09 10:17:08.042804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-10-09 10:17:08.042815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-10-09 10:17:08.042826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-10-09 10:17:08.042837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-10-09 10:17:08.042848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-10-09 10:17:08.042859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-10-09 10:17:08.042870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-10-09 10:17:08.042881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-10-09 10:17:08.042897 | orchestrator | 2025-10-09 10:17:08.042909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:17:08.042922 | orchestrator | Thursday 09 October 2025 10:16:59 +0000 (0:00:00.523) 0:00:16.514 ****** 2025-10-09 10:17:08.042934 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:17:08.042946 | orchestrator | 2025-10-09 10:17:08.042957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 
10:17:08.042968 | orchestrator | Thursday 09 October 2025 10:16:59 +0000 (0:00:00.226) 0:00:16.741 ******
2025-10-09 10:17:08.042979 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.042990 | orchestrator |
2025-10-09 10:17:08.043001 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043012 | orchestrator | Thursday 09 October 2025 10:16:59 +0000 (0:00:00.273) 0:00:17.014 ******
2025-10-09 10:17:08.043023 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.043034 | orchestrator |
2025-10-09 10:17:08.043045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043056 | orchestrator | Thursday 09 October 2025 10:16:59 +0000 (0:00:00.205) 0:00:17.220 ******
2025-10-09 10:17:08.043067 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.043103 | orchestrator |
2025-10-09 10:17:08.043115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043126 | orchestrator | Thursday 09 October 2025 10:17:00 +0000 (0:00:00.199) 0:00:17.419 ******
2025-10-09 10:17:08.043137 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.043148 | orchestrator |
2025-10-09 10:17:08.043158 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043169 | orchestrator | Thursday 09 October 2025 10:17:00 +0000 (0:00:00.654) 0:00:18.073 ******
2025-10-09 10:17:08.043180 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.043193 | orchestrator |
2025-10-09 10:17:08.043206 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043219 | orchestrator | Thursday 09 October 2025 10:17:01 +0000 (0:00:00.250) 0:00:18.323 ******
2025-10-09 10:17:08.043249 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.043262 | orchestrator |
2025-10-09 10:17:08.043275 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043287 | orchestrator | Thursday 09 October 2025 10:17:01 +0000 (0:00:00.287) 0:00:18.611 ******
2025-10-09 10:17:08.043300 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.043351 | orchestrator |
2025-10-09 10:17:08.043365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043378 | orchestrator | Thursday 09 October 2025 10:17:01 +0000 (0:00:00.273) 0:00:18.885 ******
2025-10-09 10:17:08.043390 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6)
2025-10-09 10:17:08.043405 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6)
2025-10-09 10:17:08.043417 | orchestrator |
2025-10-09 10:17:08.043429 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043442 | orchestrator | Thursday 09 October 2025 10:17:02 +0000 (0:00:00.681) 0:00:19.567 ******
2025-10-09 10:17:08.043454 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d)
2025-10-09 10:17:08.043467 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d)
2025-10-09 10:17:08.043480 | orchestrator |
2025-10-09 10:17:08.043493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043506 | orchestrator | Thursday 09 October 2025 10:17:02 +0000 (0:00:00.471) 0:00:20.038 ******
2025-10-09 10:17:08.043518 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168)
2025-10-09 10:17:08.043531 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168)
2025-10-09 10:17:08.043543 | orchestrator |
2025-10-09 10:17:08.043554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043565 | orchestrator | Thursday 09 October 2025 10:17:03 +0000 (0:00:00.512) 0:00:20.550 ******
2025-10-09 10:17:08.043592 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3)
2025-10-09 10:17:08.043604 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3)
2025-10-09 10:17:08.043615 | orchestrator |
2025-10-09 10:17:08.043626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:08.043637 | orchestrator | Thursday 09 October 2025 10:17:03 +0000 (0:00:00.516) 0:00:21.067 ******
2025-10-09 10:17:08.043648 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-10-09 10:17:08.043659 | orchestrator |
2025-10-09 10:17:08.043670 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.043681 | orchestrator | Thursday 09 October 2025 10:17:04 +0000 (0:00:00.353) 0:00:21.421 ******
2025-10-09 10:17:08.043692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-10-09 10:17:08.043713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-10-09 10:17:08.043724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-10-09 10:17:08.043735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-10-09 10:17:08.043746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-10-09 10:17:08.043757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-10-09 10:17:08.043767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-10-09 10:17:08.043778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-10-09 10:17:08.043789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-10-09 10:17:08.043800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-10-09 10:17:08.043811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-10-09 10:17:08.043821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-10-09 10:17:08.043832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-10-09 10:17:08.043843 | orchestrator |
2025-10-09 10:17:08.043854 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.043865 | orchestrator | Thursday 09 October 2025 10:17:04 +0000 (0:00:00.406) 0:00:21.827 ******
2025-10-09 10:17:08.043876 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.043887 | orchestrator |
2025-10-09 10:17:08.043898 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.043909 | orchestrator | Thursday 09 October 2025 10:17:05 +0000 (0:00:00.777) 0:00:22.605 ******
2025-10-09 10:17:08.043919 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.043930 | orchestrator |
2025-10-09 10:17:08.043948 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.043959 | orchestrator | Thursday 09 October 2025 10:17:05 +0000 (0:00:00.227) 0:00:22.833 ******
2025-10-09 10:17:08.043970 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.043980 | orchestrator |
2025-10-09 10:17:08.043991 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.044002 | orchestrator | Thursday 09 October 2025 10:17:05 +0000 (0:00:00.221) 0:00:23.055 ******
2025-10-09 10:17:08.044013 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.044024 | orchestrator |
2025-10-09 10:17:08.044035 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.044046 | orchestrator | Thursday 09 October 2025 10:17:06 +0000 (0:00:00.209) 0:00:23.264 ******
2025-10-09 10:17:08.044056 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.044067 | orchestrator |
2025-10-09 10:17:08.044078 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.044089 | orchestrator | Thursday 09 October 2025 10:17:06 +0000 (0:00:00.206) 0:00:23.470 ******
2025-10-09 10:17:08.044100 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.044111 | orchestrator |
2025-10-09 10:17:08.044122 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.044132 | orchestrator | Thursday 09 October 2025 10:17:06 +0000 (0:00:00.231) 0:00:23.702 ******
2025-10-09 10:17:08.044143 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.044154 | orchestrator |
2025-10-09 10:17:08.044165 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.044176 | orchestrator | Thursday 09 October 2025 10:17:06 +0000 (0:00:00.204) 0:00:23.906 ******
2025-10-09 10:17:08.044187 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.044198 | orchestrator |
2025-10-09 10:17:08.044208 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.044226 | orchestrator | Thursday 09 October 2025 10:17:06 +0000 (0:00:00.225) 0:00:24.132 ******
2025-10-09 10:17:08.044237 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-10-09 10:17:08.044249 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-10-09 10:17:08.044260 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-10-09 10:17:08.044271 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-10-09 10:17:08.044282 | orchestrator |
2025-10-09 10:17:08.044293 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:08.044304 | orchestrator | Thursday 09 October 2025 10:17:07 +0000 (0:00:00.906) 0:00:25.038 ******
2025-10-09 10:17:08.044331 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:08.044342 | orchestrator |
2025-10-09 10:17:08.044359 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:15.045782 | orchestrator | Thursday 09 October 2025 10:17:08 +0000 (0:00:00.240) 0:00:25.279 ******
2025-10-09 10:17:15.045871 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.045885 | orchestrator |
2025-10-09 10:17:15.045897 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:15.045909 | orchestrator | Thursday 09 October 2025 10:17:08 +0000 (0:00:00.209) 0:00:25.489 ******
2025-10-09 10:17:15.045920 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.045931 | orchestrator |
2025-10-09 10:17:15.045942 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:15.045953 | orchestrator | Thursday 09 October 2025 10:17:08 +0000 (0:00:00.246) 0:00:25.735 ******
2025-10-09 10:17:15.045964 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.045975 | orchestrator |
2025-10-09 10:17:15.045986 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-10-09 10:17:15.045997 | orchestrator | Thursday 09 October 2025 10:17:09 +0000 (0:00:00.763) 0:00:26.499 ******
2025-10-09 10:17:15.046007 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-10-09 10:17:15.046077 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-10-09 10:17:15.046089 | orchestrator |
2025-10-09 10:17:15.046100 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-10-09 10:17:15.046111 | orchestrator | Thursday 09 October 2025 10:17:09 +0000 (0:00:00.185) 0:00:26.685 ******
2025-10-09 10:17:15.046121 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.046132 | orchestrator |
2025-10-09 10:17:15.046143 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-10-09 10:17:15.046154 | orchestrator | Thursday 09 October 2025 10:17:09 +0000 (0:00:00.148) 0:00:26.834 ******
2025-10-09 10:17:15.046164 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.046175 | orchestrator |
2025-10-09 10:17:15.046185 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-10-09 10:17:15.046196 | orchestrator | Thursday 09 October 2025 10:17:09 +0000 (0:00:00.183) 0:00:27.017 ******
2025-10-09 10:17:15.046207 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.046218 | orchestrator |
2025-10-09 10:17:15.046228 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-10-09 10:17:15.046239 | orchestrator | Thursday 09 October 2025 10:17:09 +0000 (0:00:00.168) 0:00:27.186 ******
2025-10-09 10:17:15.046250 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:17:15.046261 | orchestrator |
2025-10-09 10:17:15.046272 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-10-09 10:17:15.046283 | orchestrator | Thursday 09 October 2025 10:17:10 +0000 (0:00:00.131) 0:00:27.317 ******
2025-10-09 10:17:15.046294 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'}})
2025-10-09 10:17:15.046305 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db411f8a-05b0-54f7-b748-fd517a3c676f'}})
2025-10-09 10:17:15.046343 | orchestrator |
2025-10-09 10:17:15.046356 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-10-09 10:17:15.046392 | orchestrator | Thursday 09 October 2025 10:17:10 +0000 (0:00:00.188) 0:00:27.505 ******
2025-10-09 10:17:15.046406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'}})
2025-10-09 10:17:15.046421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db411f8a-05b0-54f7-b748-fd517a3c676f'}})
2025-10-09 10:17:15.046434 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.046446 | orchestrator |
2025-10-09 10:17:15.046476 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-10-09 10:17:15.046489 | orchestrator | Thursday 09 October 2025 10:17:10 +0000 (0:00:00.163) 0:00:27.669 ******
2025-10-09 10:17:15.046501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'}})
2025-10-09 10:17:15.046514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db411f8a-05b0-54f7-b748-fd517a3c676f'}})
2025-10-09 10:17:15.046527 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.046539 | orchestrator |
2025-10-09 10:17:15.046551 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-10-09 10:17:15.046564 | orchestrator | Thursday 09 October 2025 10:17:10 +0000 (0:00:00.161) 0:00:27.831 ******
2025-10-09 10:17:15.046576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'}})
2025-10-09 10:17:15.046588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db411f8a-05b0-54f7-b748-fd517a3c676f'}})
2025-10-09 10:17:15.046601 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.046614 | orchestrator |
2025-10-09 10:17:15.046626 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-10-09 10:17:15.046638 | orchestrator | Thursday 09 October 2025 10:17:10 +0000 (0:00:00.158) 0:00:27.990 ******
2025-10-09 10:17:15.046651 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:17:15.046663 | orchestrator |
2025-10-09 10:17:15.046675 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-10-09 10:17:15.046687 | orchestrator | Thursday 09 October 2025 10:17:10 +0000 (0:00:00.186) 0:00:28.176 ******
2025-10-09 10:17:15.046697 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:17:15.046708 | orchestrator |
2025-10-09 10:17:15.046719 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-10-09 10:17:15.046730 | orchestrator | Thursday 09 October 2025 10:17:11 +0000 (0:00:00.150) 0:00:28.327 ******
2025-10-09 10:17:15.046741 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.046751 | orchestrator |
2025-10-09 10:17:15.046779 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-10-09 10:17:15.046790 | orchestrator | Thursday 09 October 2025 10:17:11 +0000 (0:00:00.360) 0:00:28.687 ******
2025-10-09 10:17:15.046801 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.046812 | orchestrator |
2025-10-09 10:17:15.046822 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-10-09 10:17:15.046833 | orchestrator | Thursday 09 October 2025 10:17:11 +0000 (0:00:00.135) 0:00:28.823 ******
2025-10-09 10:17:15.046844 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.046855 | orchestrator |
2025-10-09 10:17:15.046866 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-10-09 10:17:15.046876 | orchestrator | Thursday 09 October 2025 10:17:11 +0000 (0:00:00.157) 0:00:28.980 ******
2025-10-09 10:17:15.046887 | orchestrator | ok: [testbed-node-4] => {
2025-10-09 10:17:15.046898 | orchestrator |  "ceph_osd_devices": {
2025-10-09 10:17:15.046909 | orchestrator |  "sdb": {
2025-10-09 10:17:15.046920 | orchestrator |  "osd_lvm_uuid": "bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee"
2025-10-09 10:17:15.046931 | orchestrator |  },
2025-10-09 10:17:15.046942 | orchestrator |  "sdc": {
2025-10-09 10:17:15.046963 | orchestrator |  "osd_lvm_uuid": "db411f8a-05b0-54f7-b748-fd517a3c676f"
2025-10-09 10:17:15.046973 | orchestrator |  }
2025-10-09 10:17:15.046984 | orchestrator |  }
2025-10-09 10:17:15.046995 | orchestrator | }
2025-10-09 10:17:15.047006 | orchestrator |
2025-10-09 10:17:15.047017 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-10-09 10:17:15.047028 | orchestrator | Thursday 09 October 2025 10:17:11 +0000 (0:00:00.131) 0:00:29.112 ******
2025-10-09 10:17:15.047039 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.047049 | orchestrator |
2025-10-09 10:17:15.047060 | orchestrator | TASK [Print DB devices] ********************************************************
2025-10-09 10:17:15.047071 | orchestrator | Thursday 09 October 2025 10:17:12 +0000 (0:00:00.145) 0:00:29.257 ******
2025-10-09 10:17:15.047082 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.047092 | orchestrator |
2025-10-09 10:17:15.047103 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-10-09 10:17:15.047114 | orchestrator | Thursday 09 October 2025 10:17:12 +0000 (0:00:00.140) 0:00:29.398 ******
2025-10-09 10:17:15.047124 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:17:15.047135 | orchestrator |
2025-10-09 10:17:15.047146 | orchestrator | TASK [Print configuration data] ************************************************
2025-10-09 10:17:15.047157 | orchestrator | Thursday 09 October 2025 10:17:12 +0000 (0:00:00.236) 0:00:29.634 ******
2025-10-09 10:17:15.047168 | orchestrator | changed: [testbed-node-4] => {
2025-10-09 10:17:15.047178 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-10-09 10:17:15.047189 | orchestrator |  "ceph_osd_devices": {
2025-10-09 10:17:15.047200 | orchestrator |  "sdb": {
2025-10-09 10:17:15.047211 | orchestrator |  "osd_lvm_uuid": "bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee"
2025-10-09 10:17:15.047222 | orchestrator |  },
2025-10-09 10:17:15.047233 | orchestrator |  "sdc": {
2025-10-09 10:17:15.047244 | orchestrator |  "osd_lvm_uuid": "db411f8a-05b0-54f7-b748-fd517a3c676f"
2025-10-09 10:17:15.047254 | orchestrator |  }
2025-10-09 10:17:15.047265 | orchestrator |  },
2025-10-09 10:17:15.047276 | orchestrator |  "lvm_volumes": [
2025-10-09 10:17:15.047287 | orchestrator |  {
2025-10-09 10:17:15.047297 | orchestrator |  "data": "osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee",
2025-10-09 10:17:15.047308 | orchestrator |  "data_vg": "ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee"
2025-10-09 10:17:15.047344 | orchestrator |  },
2025-10-09 10:17:15.047355 | orchestrator |  {
2025-10-09 10:17:15.047365 | orchestrator |  "data": "osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f",
2025-10-09 10:17:15.047376 | orchestrator |  "data_vg": "ceph-db411f8a-05b0-54f7-b748-fd517a3c676f"
2025-10-09 10:17:15.047387 | orchestrator |  }
2025-10-09 10:17:15.047397 | orchestrator |  ]
2025-10-09 10:17:15.047408 | orchestrator |  }
2025-10-09 10:17:15.047418 | orchestrator | }
2025-10-09 10:17:15.047429 | orchestrator |
2025-10-09 10:17:15.047440 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-10-09 10:17:15.047450 | orchestrator | Thursday 09 October 2025 10:17:12 +0000 (0:00:00.257) 0:00:29.892 ******
2025-10-09 10:17:15.047461 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-10-09 10:17:15.047472 | orchestrator |
2025-10-09 10:17:15.047482 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-10-09 10:17:15.047493 | orchestrator |
2025-10-09 10:17:15.047504 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-10-09 10:17:15.047514 | orchestrator | Thursday 09 October 2025 10:17:13 +0000 (0:00:01.197) 0:00:31.090 ******
2025-10-09 10:17:15.047525 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-10-09 10:17:15.047536 | orchestrator |
2025-10-09 10:17:15.047547 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-10-09 10:17:15.047557 | orchestrator | Thursday 09 October 2025 10:17:14 +0000 (0:00:00.669) 0:00:31.759 ******
2025-10-09 10:17:15.047575 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:17:15.047586 | orchestrator |
2025-10-09 10:17:15.047603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:15.047614 | orchestrator | Thursday 09 October 2025 10:17:14 +0000 (0:00:00.198) 0:00:31.958 ******
2025-10-09 10:17:15.047625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-10-09 10:17:15.047636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-10-09 10:17:15.047646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-10-09 10:17:15.047657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-10-09 10:17:15.047668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-10-09 10:17:15.047679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-10-09 10:17:15.047695 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-10-09 10:17:22.477785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-10-09 10:17:22.477885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-10-09 10:17:22.477900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-10-09 10:17:22.477911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-10-09 10:17:22.477922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-10-09 10:17:22.477933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-10-09 10:17:22.477944 | orchestrator |
2025-10-09 10:17:22.477956 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.477968 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.325) 0:00:32.284 ******
2025-10-09 10:17:22.477979 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.477991 | orchestrator |
2025-10-09 10:17:22.478002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478013 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.171) 0:00:32.456 ******
2025-10-09 10:17:22.478080 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.478092 | orchestrator |
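(Annotation, not part of the job output.) The "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes" tasks above turn the `ceph_osd_devices` dict into the `lvm_volumes` list shown in the "Print configuration data" output. A minimal sketch of that derivation, assuming only what the printed config data shows (the real playbook tasks are not visible in this log):

```python
# Hedged sketch: derive the "block only" lvm_volumes entries from
# ceph_osd_devices the way the printed configuration data suggests:
# one data LV ("osd-block-<uuid>") in a matching VG ("ceph-<uuid>").
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee"},
    "sdc": {"osd_lvm_uuid": "db411f8a-05b0-54f7-b748-fd517a3c676f"},
}

def lvm_volumes_block_only(devices: dict) -> list:
    """Build the lvm_volumes list for OSDs that use a block device only."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]

print(lvm_volumes_block_only(ceph_osd_devices))
```

This matches the `lvm_volumes` block written for testbed-node-4; the block+db and block+wal variants were skipped in this run, so their shape is not reconstructed here.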
2025-10-09 10:17:22.478103 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478114 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.173) 0:00:32.630 ******
2025-10-09 10:17:22.478125 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.478135 | orchestrator |
2025-10-09 10:17:22.478146 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478157 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.156) 0:00:32.786 ******
2025-10-09 10:17:22.478168 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.478179 | orchestrator |
2025-10-09 10:17:22.478190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478201 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.172) 0:00:32.958 ******
2025-10-09 10:17:22.478211 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.478222 | orchestrator |
2025-10-09 10:17:22.478233 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478244 | orchestrator | Thursday 09 October 2025 10:17:15 +0000 (0:00:00.171) 0:00:33.130 ******
2025-10-09 10:17:22.478255 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.478265 | orchestrator |
2025-10-09 10:17:22.478276 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478287 | orchestrator | Thursday 09 October 2025 10:17:16 +0000 (0:00:00.174) 0:00:33.304 ******
2025-10-09 10:17:22.478298 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.478410 | orchestrator |
2025-10-09 10:17:22.478425 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478437 | orchestrator | Thursday 09 October 2025 10:17:16 +0000 (0:00:00.173) 0:00:33.478 ******
2025-10-09 10:17:22.478449 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.478462 | orchestrator |
2025-10-09 10:17:22.478474 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478486 | orchestrator | Thursday 09 October 2025 10:17:16 +0000 (0:00:00.150) 0:00:33.628 ******
2025-10-09 10:17:22.478499 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0)
2025-10-09 10:17:22.478513 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0)
2025-10-09 10:17:22.478526 | orchestrator |
2025-10-09 10:17:22.478538 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478550 | orchestrator | Thursday 09 October 2025 10:17:17 +0000 (0:00:00.681) 0:00:34.310 ******
2025-10-09 10:17:22.478562 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397)
2025-10-09 10:17:22.478574 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397)
2025-10-09 10:17:22.478586 | orchestrator |
2025-10-09 10:17:22.478598 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478611 | orchestrator | Thursday 09 October 2025 10:17:17 +0000 (0:00:00.462) 0:00:34.773 ******
2025-10-09 10:17:22.478622 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425)
2025-10-09 10:17:22.478635 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425)
2025-10-09 10:17:22.478647 | orchestrator |
2025-10-09 10:17:22.478660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478671 | orchestrator | Thursday 09 October 2025 10:17:17 +0000 (0:00:00.463) 0:00:35.237 ******
2025-10-09 10:17:22.478681 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f)
2025-10-09 10:17:22.478692 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f)
2025-10-09 10:17:22.478703 | orchestrator |
2025-10-09 10:17:22.478714 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:17:22.478725 | orchestrator | Thursday 09 October 2025 10:17:18 +0000 (0:00:00.473) 0:00:35.710 ******
2025-10-09 10:17:22.478736 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-10-09 10:17:22.478746 | orchestrator |
2025-10-09 10:17:22.478757 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.478768 | orchestrator | Thursday 09 October 2025 10:17:18 +0000 (0:00:00.329) 0:00:36.039 ******
2025-10-09 10:17:22.478796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-10-09 10:17:22.478808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-10-09 10:17:22.478819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-10-09 10:17:22.478830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-10-09 10:17:22.478840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-10-09 10:17:22.478851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-10-09 10:17:22.478877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-10-09 10:17:22.478889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-10-09 10:17:22.478901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-10-09 10:17:22.478919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-10-09 10:17:22.478929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-10-09 10:17:22.478940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-10-09 10:17:22.478951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-10-09 10:17:22.478962 | orchestrator |
2025-10-09 10:17:22.478973 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.478984 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.372) 0:00:36.411 ******
2025-10-09 10:17:22.478994 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479005 | orchestrator |
2025-10-09 10:17:22.479016 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479027 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.227) 0:00:36.639 ******
2025-10-09 10:17:22.479038 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479049 | orchestrator |
2025-10-09 10:17:22.479059 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479070 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.175) 0:00:36.815 ******
2025-10-09 10:17:22.479081 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479092 | orchestrator |
2025-10-09 10:17:22.479107 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479119 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.166) 0:00:36.981 ******
2025-10-09 10:17:22.479130 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479141 | orchestrator |
2025-10-09 10:17:22.479151 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479162 | orchestrator | Thursday 09 October 2025 10:17:19 +0000 (0:00:00.199) 0:00:37.181 ******
2025-10-09 10:17:22.479173 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479184 | orchestrator |
2025-10-09 10:17:22.479195 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479205 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.165) 0:00:37.346 ******
2025-10-09 10:17:22.479216 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479227 | orchestrator |
2025-10-09 10:17:22.479238 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479249 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.491) 0:00:37.838 ******
2025-10-09 10:17:22.479260 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479270 | orchestrator |
2025-10-09 10:17:22.479281 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479292 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.189) 0:00:38.027 ******
2025-10-09 10:17:22.479303 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479334 | orchestrator |
2025-10-09 10:17:22.479346 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479357 | orchestrator | Thursday 09 October 2025 10:17:20 +0000 (0:00:00.181) 0:00:38.209 ******
2025-10-09 10:17:22.479368 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-10-09 10:17:22.479379 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-10-09 10:17:22.479390 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-10-09 10:17:22.479401 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-10-09 10:17:22.479412 | orchestrator |
2025-10-09 10:17:22.479423 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479434 | orchestrator | Thursday 09 October 2025 10:17:21 +0000 (0:00:00.592) 0:00:38.801 ******
2025-10-09 10:17:22.479445 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479456 | orchestrator |
2025-10-09 10:17:22.479466 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479484 | orchestrator | Thursday 09 October 2025 10:17:21 +0000 (0:00:00.190) 0:00:38.992 ******
2025-10-09 10:17:22.479495 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479506 | orchestrator |
2025-10-09 10:17:22.479517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479528 | orchestrator | Thursday 09 October 2025 10:17:21 +0000 (0:00:00.201) 0:00:39.193 ******
2025-10-09 10:17:22.479539 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479550 | orchestrator |
2025-10-09 10:17:22.479561 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:17:22.479572 | orchestrator | Thursday 09 October 2025 10:17:22 +0000 (0:00:00.217) 0:00:39.411 ******
2025-10-09 10:17:22.479583 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:22.479594 | orchestrator |
2025-10-09 10:17:22.479605 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-10-09 10:17:22.479622 | orchestrator | Thursday 09 October 2025 10:17:22 +0000 (0:00:00.304) 0:00:39.716 ******
2025-10-09 10:17:27.264958 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-10-09 10:17:27.265050 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-10-09 10:17:27.265064 | orchestrator |
2025-10-09 10:17:27.265078 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-10-09 10:17:27.265090 | orchestrator | Thursday 09 October 2025 10:17:22 +0000 (0:00:00.196) 0:00:39.913 ******
2025-10-09 10:17:27.265101 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:27.265112 | orchestrator |
2025-10-09 10:17:27.265124 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-10-09 10:17:27.265135 | orchestrator | Thursday 09 October 2025 10:17:22 +0000 (0:00:00.193) 0:00:40.107 ******
2025-10-09 10:17:27.265145 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:27.265156 | orchestrator |
2025-10-09 10:17:27.265167 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-10-09 10:17:27.265178 | orchestrator | Thursday 09 October 2025 10:17:23 +0000 (0:00:00.146) 0:00:40.253 ******
2025-10-09 10:17:27.265189 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:27.265200 | orchestrator |
2025-10-09 10:17:27.265211 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-10-09 10:17:27.265222 | orchestrator | Thursday 09 October 2025 10:17:23 +0000 (0:00:00.368) 0:00:40.622 ******
2025-10-09 10:17:27.265232 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:17:27.265244 | orchestrator |
2025-10-09 10:17:27.265255 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-10-09 10:17:27.265266 | orchestrator | Thursday 09 October 2025 10:17:23 +0000 (0:00:00.153) 0:00:40.776 ******
2025-10-09 10:17:27.265278 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83d577c9-ff1a-5f1d-bd0e-44f99d742f78'}})
2025-10-09 10:17:27.265290 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ce20a60-fba3-5536-8b48-1e48c039a9b4'}}) 2025-10-09 10:17:27.265301 | orchestrator | 2025-10-09 10:17:27.265344 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-10-09 10:17:27.265356 | orchestrator | Thursday 09 October 2025 10:17:23 +0000 (0:00:00.196) 0:00:40.972 ****** 2025-10-09 10:17:27.265368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83d577c9-ff1a-5f1d-bd0e-44f99d742f78'}})  2025-10-09 10:17:27.265382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ce20a60-fba3-5536-8b48-1e48c039a9b4'}})  2025-10-09 10:17:27.265393 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:27.265404 | orchestrator | 2025-10-09 10:17:27.265415 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-10-09 10:17:27.265426 | orchestrator | Thursday 09 October 2025 10:17:23 +0000 (0:00:00.180) 0:00:41.152 ****** 2025-10-09 10:17:27.265436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83d577c9-ff1a-5f1d-bd0e-44f99d742f78'}})  2025-10-09 10:17:27.265474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ce20a60-fba3-5536-8b48-1e48c039a9b4'}})  2025-10-09 10:17:27.265486 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:27.265498 | orchestrator | 2025-10-09 10:17:27.265511 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-10-09 10:17:27.265523 | orchestrator | Thursday 09 October 2025 10:17:24 +0000 (0:00:00.203) 0:00:41.356 ****** 2025-10-09 10:17:27.265536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83d577c9-ff1a-5f1d-bd0e-44f99d742f78'}})  2025-10-09 
10:17:27.265565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ce20a60-fba3-5536-8b48-1e48c039a9b4'}})  2025-10-09 10:17:27.265578 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:27.265590 | orchestrator | 2025-10-09 10:17:27.265602 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-10-09 10:17:27.265615 | orchestrator | Thursday 09 October 2025 10:17:24 +0000 (0:00:00.170) 0:00:41.526 ****** 2025-10-09 10:17:27.265627 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:27.265639 | orchestrator | 2025-10-09 10:17:27.265651 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-10-09 10:17:27.265664 | orchestrator | Thursday 09 October 2025 10:17:24 +0000 (0:00:00.160) 0:00:41.687 ****** 2025-10-09 10:17:27.265676 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:17:27.265688 | orchestrator | 2025-10-09 10:17:27.265701 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-10-09 10:17:27.265713 | orchestrator | Thursday 09 October 2025 10:17:24 +0000 (0:00:00.161) 0:00:41.849 ****** 2025-10-09 10:17:27.265726 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:27.265738 | orchestrator | 2025-10-09 10:17:27.265750 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-10-09 10:17:27.265763 | orchestrator | Thursday 09 October 2025 10:17:24 +0000 (0:00:00.169) 0:00:42.018 ****** 2025-10-09 10:17:27.265775 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:17:27.265787 | orchestrator | 2025-10-09 10:17:27.265800 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-10-09 10:17:27.265812 | orchestrator | Thursday 09 October 2025 10:17:24 +0000 (0:00:00.169) 0:00:42.188 ****** 2025-10-09 10:17:27.265825 | orchestrator | skipping: [testbed-node-5] 
2025-10-09 10:17:27.265837 | orchestrator |
2025-10-09 10:17:27.265849 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-10-09 10:17:27.265859 | orchestrator | Thursday 09 October 2025 10:17:25 +0000 (0:00:00.149) 0:00:42.337 ******
2025-10-09 10:17:27.265870 | orchestrator | ok: [testbed-node-5] => {
2025-10-09 10:17:27.265881 | orchestrator |     "ceph_osd_devices": {
2025-10-09 10:17:27.265892 | orchestrator |         "sdb": {
2025-10-09 10:17:27.265903 | orchestrator |             "osd_lvm_uuid": "83d577c9-ff1a-5f1d-bd0e-44f99d742f78"
2025-10-09 10:17:27.265931 | orchestrator |         },
2025-10-09 10:17:27.265943 | orchestrator |         "sdc": {
2025-10-09 10:17:27.265954 | orchestrator |             "osd_lvm_uuid": "8ce20a60-fba3-5536-8b48-1e48c039a9b4"
2025-10-09 10:17:27.265965 | orchestrator |         }
2025-10-09 10:17:27.265976 | orchestrator |     }
2025-10-09 10:17:27.265987 | orchestrator | }
2025-10-09 10:17:27.265999 | orchestrator |
2025-10-09 10:17:27.266010 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-10-09 10:17:27.266067 | orchestrator | Thursday 09 October 2025 10:17:25 +0000 (0:00:00.148) 0:00:42.485 ******
2025-10-09 10:17:27.266079 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:27.266090 | orchestrator |
2025-10-09 10:17:27.266101 | orchestrator | TASK [Print DB devices] ********************************************************
2025-10-09 10:17:27.266112 | orchestrator | Thursday 09 October 2025 10:17:25 +0000 (0:00:00.138) 0:00:42.624 ******
2025-10-09 10:17:27.266123 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:27.266134 | orchestrator |
2025-10-09 10:17:27.266145 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-10-09 10:17:27.266167 | orchestrator | Thursday 09 October 2025 10:17:25 +0000 (0:00:00.400) 0:00:43.024 ******
2025-10-09 10:17:27.266178 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:17:27.266189 | orchestrator |
2025-10-09 10:17:27.266200 | orchestrator | TASK [Print configuration data] ************************************************
2025-10-09 10:17:27.266211 | orchestrator | Thursday 09 October 2025 10:17:25 +0000 (0:00:00.150) 0:00:43.174 ******
2025-10-09 10:17:27.266222 | orchestrator | changed: [testbed-node-5] => {
2025-10-09 10:17:27.266233 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-10-09 10:17:27.266244 | orchestrator |         "ceph_osd_devices": {
2025-10-09 10:17:27.266256 | orchestrator |             "sdb": {
2025-10-09 10:17:27.266267 | orchestrator |                 "osd_lvm_uuid": "83d577c9-ff1a-5f1d-bd0e-44f99d742f78"
2025-10-09 10:17:27.266278 | orchestrator |             },
2025-10-09 10:17:27.266289 | orchestrator |             "sdc": {
2025-10-09 10:17:27.266300 | orchestrator |                 "osd_lvm_uuid": "8ce20a60-fba3-5536-8b48-1e48c039a9b4"
2025-10-09 10:17:27.266330 | orchestrator |             }
2025-10-09 10:17:27.266342 | orchestrator |         },
2025-10-09 10:17:27.266352 | orchestrator |         "lvm_volumes": [
2025-10-09 10:17:27.266363 | orchestrator |             {
2025-10-09 10:17:27.266374 | orchestrator |                 "data": "osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78",
2025-10-09 10:17:27.266386 | orchestrator |                 "data_vg": "ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78"
2025-10-09 10:17:27.266396 | orchestrator |             },
2025-10-09 10:17:27.266407 | orchestrator |             {
2025-10-09 10:17:27.266418 | orchestrator |                 "data": "osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4",
2025-10-09 10:17:27.266429 | orchestrator |                 "data_vg": "ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4"
2025-10-09 10:17:27.266440 | orchestrator |             }
2025-10-09 10:17:27.266451 | orchestrator |         ]
2025-10-09 10:17:27.266462 | orchestrator |     }
2025-10-09 10:17:27.266477 | orchestrator | }
2025-10-09 10:17:27.266489 | orchestrator |
2025-10-09 10:17:27.266500 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-10-09 10:17:27.266511 | orchestrator | Thursday 09 October 2025 10:17:26 +0000 (0:00:00.252) 0:00:43.427 ******
2025-10-09 10:17:27.266522 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-10-09 10:17:27.266532 | orchestrator |
2025-10-09 10:17:27.266543 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:17:27.266555 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-09 10:17:27.266567 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-09 10:17:27.266578 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-10-09 10:17:27.266589 | orchestrator |
2025-10-09 10:17:27.266600 | orchestrator |
2025-10-09 10:17:27.266611 | orchestrator |
2025-10-09 10:17:27.266621 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:17:27.266632 | orchestrator | Thursday 09 October 2025 10:17:27 +0000 (0:00:01.065) 0:00:44.493 ******
2025-10-09 10:17:27.266643 | orchestrator | ===============================================================================
2025-10-09 10:17:27.266654 | orchestrator | Write configuration file ------------------------------------------------ 4.12s
2025-10-09 10:17:27.266665 | orchestrator | Add known links to the list of available block devices ------------------ 1.34s
2025-10-09 10:17:27.266675 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.22s
2025-10-09 10:17:27.266686 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s
2025-10-09 10:17:27.266697 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2025-10-09 10:17:27.266715 | orchestrator | Print configuration data ------------------------------------------------ 0.96s
2025-10-09 10:17:27.266726 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s
2025-10-09 10:17:27.266737 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2025-10-09 10:17:27.266748 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2025-10-09 10:17:27.266758 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2025-10-09 10:17:27.266769 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.74s
2025-10-09 10:17:27.266780 | orchestrator | Get initial list of available block devices ----------------------------- 0.73s
2025-10-09 10:17:27.266790 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.69s
2025-10-09 10:17:27.266801 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-10-09 10:17:27.266820 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-10-09 10:17:27.698888 | orchestrator | Print DB devices -------------------------------------------------------- 0.68s
2025-10-09 10:17:27.698988 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-10-09 10:17:27.699002 | orchestrator | Set DB devices config data ---------------------------------------------- 0.68s
2025-10-09 10:17:27.699014 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-10-09 10:17:27.699025 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-10-09 10:17:50.459938 | orchestrator | 2025-10-09 10:17:50 | INFO  | Task 0f1bc74d-407c-4293-a9b4-f0eb2c195794 (sync inventory) is running in background. Output coming soon.
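The "Print configuration data" output above shows the pattern the play applies: each entry in `lvm_volumes` is derived from a device's `osd_lvm_uuid` by prefixing `osd-block-` for the LV name and `ceph-` for the VG name. A minimal sketch of that mapping, using the UUIDs from the log (the helper function name is hypothetical, not part of the playbook):

```python
# Sketch: derive the lvm_volumes list from ceph_osd_devices, mirroring the
# "Print configuration data" output above. Function name is illustrative only.

def lvm_volumes_from_osd_devices(ceph_osd_devices: dict) -> list:
    volumes = []
    for device, config in ceph_osd_devices.items():
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

# Values taken from the testbed-node-5 log output above.
devices = {
    "sdb": {"osd_lvm_uuid": "83d577c9-ff1a-5f1d-bd0e-44f99d742f78"},
    "sdc": {"osd_lvm_uuid": "8ce20a60-fba3-5536-8b48-1e48c039a9b4"},
}
print(lvm_volumes_from_osd_devices(devices))
```

Run against the two devices from the log, this reproduces exactly the `lvm_volumes` structure that the handler then writes to the configuration file.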
2025-10-09 10:18:18.556775 | orchestrator | 2025-10-09 10:17:51 | INFO  | Starting group_vars file reorganization
2025-10-09 10:18:18.556876 | orchestrator | 2025-10-09 10:17:51 | INFO  | Moved 0 file(s) to their respective directories
2025-10-09 10:18:18.556892 | orchestrator | 2025-10-09 10:17:51 | INFO  | Group_vars file reorganization completed
2025-10-09 10:18:18.556905 | orchestrator | 2025-10-09 10:17:54 | INFO  | Starting variable preparation from inventory
2025-10-09 10:18:18.556917 | orchestrator | 2025-10-09 10:17:58 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-10-09 10:18:18.556928 | orchestrator | 2025-10-09 10:17:58 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-10-09 10:18:18.556940 | orchestrator | 2025-10-09 10:17:58 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-10-09 10:18:18.556974 | orchestrator | 2025-10-09 10:17:58 | INFO  | 3 file(s) written, 6 host(s) processed
2025-10-09 10:18:18.556986 | orchestrator | 2025-10-09 10:17:58 | INFO  | Variable preparation completed
2025-10-09 10:18:18.556997 | orchestrator | 2025-10-09 10:17:59 | INFO  | Starting inventory overwrite handling
2025-10-09 10:18:18.557009 | orchestrator | 2025-10-09 10:17:59 | INFO  | Handling group overwrites in 99-overwrite
2025-10-09 10:18:18.557026 | orchestrator | 2025-10-09 10:17:59 | INFO  | Removing group frr:children from 60-generic
2025-10-09 10:18:18.557038 | orchestrator | 2025-10-09 10:17:59 | INFO  | Removing group storage:children from 50-kolla
2025-10-09 10:18:18.557049 | orchestrator | 2025-10-09 10:17:59 | INFO  | Removing group netbird:children from 50-infrastructure
2025-10-09 10:18:18.557060 | orchestrator | 2025-10-09 10:17:59 | INFO  | Removing group ceph-rgw from 50-ceph
2025-10-09 10:18:18.557071 | orchestrator | 2025-10-09 10:17:59 | INFO  | Removing group ceph-mds from 50-ceph
2025-10-09 10:18:18.557083 | orchestrator | 2025-10-09 10:17:59 | INFO  | Handling group overwrites in 20-roles
2025-10-09 10:18:18.557094 | orchestrator | 2025-10-09 10:17:59 | INFO  | Removing group k3s_node from 50-infrastructure
2025-10-09 10:18:18.557134 | orchestrator | 2025-10-09 10:17:59 | INFO  | Removed 6 group(s) in total
2025-10-09 10:18:18.557145 | orchestrator | 2025-10-09 10:17:59 | INFO  | Inventory overwrite handling completed
2025-10-09 10:18:18.557157 | orchestrator | 2025-10-09 10:18:00 | INFO  | Starting merge of inventory files
2025-10-09 10:18:18.557168 | orchestrator | 2025-10-09 10:18:00 | INFO  | Inventory files merged successfully
2025-10-09 10:18:18.557178 | orchestrator | 2025-10-09 10:18:04 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-10-09 10:18:18.557189 | orchestrator | 2025-10-09 10:18:17 | INFO  | Successfully wrote ClusterShell configuration
2025-10-09 10:18:18.557201 | orchestrator | [master 370e33e] 2025-10-09-10-18
2025-10-09 10:18:18.557213 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-10-09 10:18:21.126393 | orchestrator | 2025-10-09 10:18:21 | INFO  | Task 974532c5-f219-47ac-95f1-13366b40b4ca (ceph-create-lvm-devices) was prepared for execution.
2025-10-09 10:18:21.126491 | orchestrator | 2025-10-09 10:18:21 | INFO  | It takes a moment until task 974532c5-f219-47ac-95f1-13366b40b4ca (ceph-create-lvm-devices) has been started and output is visible here.
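The ceph-create-lvm-devices task that follows consumes the `lvm_volumes` structure generated earlier: its "Create block VGs" task creates one volume group per entry and "Create block LVs" creates the matching logical volume inside it. A rough sketch of the LVM commands this implies; note that the physical-volume paths and the `-l 100%VG` sizing flag are assumptions for illustration and are not shown in the log:

```python
# Sketch of the LVM operations implied by the "Create block VGs" /
# "Create block LVs" tasks below. VG/LV names follow the log; the PV path
# and lvcreate sizing are assumed, not taken from the playbook.

def lvm_commands(lvm_volumes: list, pv_by_vg: dict) -> list:
    cmds = []
    for vol in lvm_volumes:
        pv = pv_by_vg[vol["data_vg"]]  # physical volume backing this VG
        cmds.append(f"vgcreate {vol['data_vg']} {pv}")
        cmds.append(f"lvcreate -l 100%VG -n {vol['data']} {vol['data_vg']}")
    return cmds

# One entry, using a UUID from the testbed-node-3 output below.
volumes = [
    {"data": "osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74",
     "data_vg": "ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74"},
]
# Hypothetical device path for illustration only.
pvs = {"ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74": "/dev/sdb"}
for cmd in lvm_commands(volumes, pvs):
    print(cmd)
```

The one-VG-per-OSD layout matches what the ceph-volume `lvm` workflow expects from a `data`/`data_vg` pair.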
2025-10-09 10:18:34.502358 | orchestrator |
2025-10-09 10:18:34.502488 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-10-09 10:18:34.502516 | orchestrator |
2025-10-09 10:18:34.502536 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-10-09 10:18:34.503283 | orchestrator | Thursday 09 October 2025 10:18:25 +0000 (0:00:00.333) 0:00:00.333 ******
2025-10-09 10:18:34.503354 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-09 10:18:34.503376 | orchestrator |
2025-10-09 10:18:34.503396 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-10-09 10:18:34.503414 | orchestrator | Thursday 09 October 2025 10:18:26 +0000 (0:00:00.281) 0:00:00.615 ******
2025-10-09 10:18:34.503432 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:18:34.503444 | orchestrator |
2025-10-09 10:18:34.503455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.503467 | orchestrator | Thursday 09 October 2025 10:18:26 +0000 (0:00:00.236) 0:00:00.851 ******
2025-10-09 10:18:34.503479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-10-09 10:18:34.503492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-10-09 10:18:34.503504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-10-09 10:18:34.503515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-10-09 10:18:34.503526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-10-09 10:18:34.503537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-10-09 10:18:34.503548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-10-09 10:18:34.503559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-10-09 10:18:34.503570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-10-09 10:18:34.503581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-10-09 10:18:34.503592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-10-09 10:18:34.503604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-10-09 10:18:34.503614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-10-09 10:18:34.503626 | orchestrator |
2025-10-09 10:18:34.503637 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.503671 | orchestrator | Thursday 09 October 2025 10:18:27 +0000 (0:00:00.601) 0:00:01.452 ******
2025-10-09 10:18:34.503683 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.503694 | orchestrator |
2025-10-09 10:18:34.503705 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.503716 | orchestrator | Thursday 09 October 2025 10:18:27 +0000 (0:00:00.238) 0:00:01.691 ******
2025-10-09 10:18:34.503727 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.503738 | orchestrator |
2025-10-09 10:18:34.503749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.503760 | orchestrator | Thursday 09 October 2025 10:18:27 +0000 (0:00:00.200) 0:00:01.891 ******
2025-10-09 10:18:34.503771 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.503781 | orchestrator |
2025-10-09 10:18:34.503792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.503803 | orchestrator | Thursday 09 October 2025 10:18:27 +0000 (0:00:00.204) 0:00:02.095 ******
2025-10-09 10:18:34.503814 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.503824 | orchestrator |
2025-10-09 10:18:34.503835 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.503846 | orchestrator | Thursday 09 October 2025 10:18:28 +0000 (0:00:00.308) 0:00:02.403 ******
2025-10-09 10:18:34.503857 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.503867 | orchestrator |
2025-10-09 10:18:34.503878 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.503889 | orchestrator | Thursday 09 October 2025 10:18:28 +0000 (0:00:00.278) 0:00:02.682 ******
2025-10-09 10:18:34.503900 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.503910 | orchestrator |
2025-10-09 10:18:34.503921 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.503932 | orchestrator | Thursday 09 October 2025 10:18:28 +0000 (0:00:00.213) 0:00:02.895 ******
2025-10-09 10:18:34.503942 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.503953 | orchestrator |
2025-10-09 10:18:34.503964 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.503974 | orchestrator | Thursday 09 October 2025 10:18:28 +0000 (0:00:00.234) 0:00:03.129 ******
2025-10-09 10:18:34.503985 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.503996 | orchestrator |
2025-10-09 10:18:34.504007 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.504017 | orchestrator | Thursday 09 October 2025 10:18:29 +0000 (0:00:00.270) 0:00:03.400 ******
2025-10-09 10:18:34.504028 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843)
2025-10-09 10:18:34.504040 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843)
2025-10-09 10:18:34.504051 | orchestrator |
2025-10-09 10:18:34.504062 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.504072 | orchestrator | Thursday 09 October 2025 10:18:29 +0000 (0:00:00.449) 0:00:03.849 ******
2025-10-09 10:18:34.504104 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d)
2025-10-09 10:18:34.504116 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d)
2025-10-09 10:18:34.504127 | orchestrator |
2025-10-09 10:18:34.504138 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.504148 | orchestrator | Thursday 09 October 2025 10:18:30 +0000 (0:00:00.584) 0:00:04.434 ******
2025-10-09 10:18:34.504159 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84)
2025-10-09 10:18:34.504170 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84)
2025-10-09 10:18:34.504181 | orchestrator |
2025-10-09 10:18:34.504191 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.504209 | orchestrator | Thursday 09 October 2025 10:18:30 +0000 (0:00:00.815) 0:00:05.249 ******
2025-10-09 10:18:34.504220 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e)
2025-10-09 10:18:34.504231 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e)
2025-10-09 10:18:34.504242 | orchestrator |
2025-10-09 10:18:34.504252 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:34.504263 | orchestrator | Thursday 09 October 2025 10:18:31 +0000 (0:00:00.860) 0:00:06.110 ******
2025-10-09 10:18:34.504274 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-10-09 10:18:34.504284 | orchestrator |
2025-10-09 10:18:34.504319 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:34.504340 | orchestrator | Thursday 09 October 2025 10:18:32 +0000 (0:00:00.350) 0:00:06.460 ******
2025-10-09 10:18:34.504352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-10-09 10:18:34.504363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-10-09 10:18:34.504373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-10-09 10:18:34.504384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-10-09 10:18:34.504413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-10-09 10:18:34.504424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-10-09 10:18:34.504435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-10-09 10:18:34.504446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-10-09 10:18:34.504456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-10-09 10:18:34.504467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-10-09 10:18:34.504477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-10-09 10:18:34.504488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-10-09 10:18:34.504503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-10-09 10:18:34.504515 | orchestrator |
2025-10-09 10:18:34.504528 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:34.504547 | orchestrator | Thursday 09 October 2025 10:18:32 +0000 (0:00:00.498) 0:00:06.959 ******
2025-10-09 10:18:34.504564 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.504583 | orchestrator |
2025-10-09 10:18:34.504601 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:34.504619 | orchestrator | Thursday 09 October 2025 10:18:32 +0000 (0:00:00.279) 0:00:07.238 ******
2025-10-09 10:18:34.504639 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.504651 | orchestrator |
2025-10-09 10:18:34.504661 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:34.504672 | orchestrator | Thursday 09 October 2025 10:18:33 +0000 (0:00:00.259) 0:00:07.497 ******
2025-10-09 10:18:34.504683 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.504693 | orchestrator |
2025-10-09 10:18:34.504704 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:34.504715 | orchestrator | Thursday 09 October 2025 10:18:33 +0000 (0:00:00.225) 0:00:07.722 ******
2025-10-09 10:18:34.504726 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.504736 | orchestrator |
2025-10-09 10:18:34.504747 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:34.504767 | orchestrator | Thursday 09 October 2025 10:18:33 +0000 (0:00:00.261) 0:00:07.984 ******
2025-10-09 10:18:34.504778 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.504788 | orchestrator |
2025-10-09 10:18:34.504799 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:34.504816 | orchestrator | Thursday 09 October 2025 10:18:33 +0000 (0:00:00.216) 0:00:08.201 ******
2025-10-09 10:18:34.504835 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.504854 | orchestrator |
2025-10-09 10:18:34.504872 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:34.504890 | orchestrator | Thursday 09 October 2025 10:18:34 +0000 (0:00:00.227) 0:00:08.429 ******
2025-10-09 10:18:34.504910 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:34.504922 | orchestrator |
2025-10-09 10:18:34.504933 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:34.504943 | orchestrator | Thursday 09 October 2025 10:18:34 +0000 (0:00:00.205) 0:00:08.634 ******
2025-10-09 10:18:34.504964 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:42.899198 | orchestrator |
2025-10-09 10:18:42.899353 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:42.899371 | orchestrator | Thursday 09 October 2025 10:18:34 +0000 (0:00:00.209) 0:00:08.844 ******
2025-10-09 10:18:42.899383 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-10-09 10:18:42.899394 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-10-09 10:18:42.899404 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-10-09 10:18:42.899414 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-10-09 10:18:42.899424 | orchestrator |
2025-10-09 10:18:42.899434 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:42.899444 | orchestrator | Thursday 09 October 2025 10:18:35 +0000 (0:00:01.151) 0:00:09.995 ******
2025-10-09 10:18:42.899454 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:42.899464 | orchestrator |
2025-10-09 10:18:42.899474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:42.899483 | orchestrator | Thursday 09 October 2025 10:18:35 +0000 (0:00:00.213) 0:00:10.209 ******
2025-10-09 10:18:42.899493 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:42.899503 | orchestrator |
2025-10-09 10:18:42.899513 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:42.899522 | orchestrator | Thursday 09 October 2025 10:18:36 +0000 (0:00:00.224) 0:00:10.434 ******
2025-10-09 10:18:42.899532 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:42.899542 | orchestrator |
2025-10-09 10:18:42.899552 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:18:42.899562 | orchestrator | Thursday 09 October 2025 10:18:36 +0000 (0:00:00.228) 0:00:10.663 ******
2025-10-09 10:18:42.899571 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:42.899581 | orchestrator |
2025-10-09 10:18:42.899591 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-10-09 10:18:42.899600 | orchestrator | Thursday 09 October 2025 10:18:36 +0000 (0:00:00.220) 0:00:10.884 ******
2025-10-09 10:18:42.899610 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:42.899620 | orchestrator |
2025-10-09 10:18:42.899629 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-10-09 10:18:42.899639 | orchestrator | Thursday 09 October 2025 10:18:36 +0000 (0:00:00.159) 0:00:11.043 ******
2025-10-09 10:18:42.899649 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cbdaba5-e3a8-55ff-9207-33249002ea74'}})
2025-10-09 10:18:42.899660 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b8397ec-b473-5fab-a988-270c3fd4ebb0'}})
2025-10-09 10:18:42.899669 | orchestrator |
2025-10-09 10:18:42.899679 | orchestrator | TASK [Create block VGs] ********************************************************
2025-10-09 10:18:42.899689 | orchestrator | Thursday 09 October 2025 10:18:36 +0000 (0:00:00.223) 0:00:11.267 ******
2025-10-09 10:18:42.899700 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})
2025-10-09 10:18:42.899732 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})
2025-10-09 10:18:42.899744 | orchestrator |
2025-10-09 10:18:42.899755 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-10-09 10:18:42.899766 | orchestrator | Thursday 09 October 2025 10:18:38 +0000 (0:00:02.023) 0:00:13.290 ******
2025-10-09 10:18:42.899778 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})
2025-10-09 10:18:42.899790 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})
2025-10-09 10:18:42.899802 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:18:42.899813 | orchestrator |
2025-10-09 10:18:42.899824 | orchestrator | TASK [Create block LVs] ********************************************************
2025-10-09 10:18:42.899835 | orchestrator | Thursday 09 October 2025 10:18:39 +0000 (0:00:00.168) 0:00:13.459 ******
2025-10-09 10:18:42.899845 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'}) 2025-10-09 10:18:42.899857 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'}) 2025-10-09 10:18:42.899868 | orchestrator | 2025-10-09 10:18:42.899879 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-10-09 10:18:42.899890 | orchestrator | Thursday 09 October 2025 10:18:40 +0000 (0:00:01.447) 0:00:14.906 ****** 2025-10-09 10:18:42.899901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:42.899913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:42.899924 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.899935 | orchestrator | 2025-10-09 10:18:42.899946 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-10-09 10:18:42.899957 | orchestrator | Thursday 09 October 2025 10:18:40 +0000 (0:00:00.154) 0:00:15.060 ****** 2025-10-09 10:18:42.899969 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.899980 | orchestrator | 2025-10-09 10:18:42.899991 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-10-09 10:18:42.900017 | orchestrator | Thursday 09 October 2025 10:18:40 +0000 (0:00:00.126) 0:00:15.186 ****** 2025-10-09 10:18:42.900029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:42.900040 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:42.900051 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900062 | orchestrator | 2025-10-09 10:18:42.900074 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-10-09 10:18:42.900084 | orchestrator | Thursday 09 October 2025 10:18:41 +0000 (0:00:00.384) 0:00:15.571 ****** 2025-10-09 10:18:42.900096 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900106 | orchestrator | 2025-10-09 10:18:42.900115 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-10-09 10:18:42.900125 | orchestrator | Thursday 09 October 2025 10:18:41 +0000 (0:00:00.142) 0:00:15.713 ****** 2025-10-09 10:18:42.900135 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:42.900153 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:42.900163 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900173 | orchestrator | 2025-10-09 10:18:42.900182 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-10-09 10:18:42.900192 | orchestrator | Thursday 09 October 2025 10:18:41 +0000 (0:00:00.176) 0:00:15.890 ****** 2025-10-09 10:18:42.900202 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900211 | orchestrator | 2025-10-09 10:18:42.900221 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-10-09 10:18:42.900231 | orchestrator | Thursday 09 October 2025 10:18:41 +0000 (0:00:00.160) 0:00:16.050 ****** 2025-10-09 10:18:42.900241 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:42.900251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:42.900260 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900270 | orchestrator | 2025-10-09 10:18:42.900280 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-10-09 10:18:42.900290 | orchestrator | Thursday 09 October 2025 10:18:41 +0000 (0:00:00.180) 0:00:16.231 ****** 2025-10-09 10:18:42.900320 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:42.900330 | orchestrator | 2025-10-09 10:18:42.900340 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-10-09 10:18:42.900350 | orchestrator | Thursday 09 October 2025 10:18:42 +0000 (0:00:00.169) 0:00:16.400 ****** 2025-10-09 10:18:42.900379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:42.900389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:42.900399 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900409 | orchestrator | 2025-10-09 10:18:42.900419 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-10-09 10:18:42.900428 | orchestrator | Thursday 09 October 2025 10:18:42 +0000 (0:00:00.180) 0:00:16.581 ****** 2025-10-09 10:18:42.900438 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  
2025-10-09 10:18:42.900448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:42.900458 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900467 | orchestrator | 2025-10-09 10:18:42.900477 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-10-09 10:18:42.900487 | orchestrator | Thursday 09 October 2025 10:18:42 +0000 (0:00:00.199) 0:00:16.781 ****** 2025-10-09 10:18:42.900496 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:42.900506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:42.900516 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900526 | orchestrator | 2025-10-09 10:18:42.900535 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-10-09 10:18:42.900545 | orchestrator | Thursday 09 October 2025 10:18:42 +0000 (0:00:00.169) 0:00:16.950 ****** 2025-10-09 10:18:42.900555 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900571 | orchestrator | 2025-10-09 10:18:42.900581 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-10-09 10:18:42.900591 | orchestrator | Thursday 09 October 2025 10:18:42 +0000 (0:00:00.146) 0:00:17.097 ****** 2025-10-09 10:18:42.900601 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:42.900610 | orchestrator | 2025-10-09 10:18:42.900626 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-10-09 10:18:49.675607 | orchestrator | Thursday 09 October 2025 10:18:42 +0000 (0:00:00.145) 
0:00:17.242 ****** 2025-10-09 10:18:49.675708 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.675722 | orchestrator | 2025-10-09 10:18:49.675733 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-10-09 10:18:49.675743 | orchestrator | Thursday 09 October 2025 10:18:43 +0000 (0:00:00.173) 0:00:17.416 ****** 2025-10-09 10:18:49.675754 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:18:49.675764 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-10-09 10:18:49.675775 | orchestrator | } 2025-10-09 10:18:49.675785 | orchestrator | 2025-10-09 10:18:49.675796 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-10-09 10:18:49.675806 | orchestrator | Thursday 09 October 2025 10:18:43 +0000 (0:00:00.360) 0:00:17.776 ****** 2025-10-09 10:18:49.675816 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:18:49.675826 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-10-09 10:18:49.675836 | orchestrator | } 2025-10-09 10:18:49.675846 | orchestrator | 2025-10-09 10:18:49.675855 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-10-09 10:18:49.675865 | orchestrator | Thursday 09 October 2025 10:18:43 +0000 (0:00:00.129) 0:00:17.906 ****** 2025-10-09 10:18:49.675876 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:18:49.675885 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-10-09 10:18:49.675895 | orchestrator | } 2025-10-09 10:18:49.675905 | orchestrator | 2025-10-09 10:18:49.675915 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-10-09 10:18:49.675925 | orchestrator | Thursday 09 October 2025 10:18:43 +0000 (0:00:00.144) 0:00:18.051 ****** 2025-10-09 10:18:49.675935 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:49.675944 | orchestrator | 2025-10-09 10:18:49.675954 | orchestrator | TASK [Gather WAL VGs with 
total and available size in bytes] ******************* 2025-10-09 10:18:49.675964 | orchestrator | Thursday 09 October 2025 10:18:44 +0000 (0:00:00.685) 0:00:18.737 ****** 2025-10-09 10:18:49.675974 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:49.675983 | orchestrator | 2025-10-09 10:18:49.675993 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-10-09 10:18:49.676003 | orchestrator | Thursday 09 October 2025 10:18:44 +0000 (0:00:00.522) 0:00:19.259 ****** 2025-10-09 10:18:49.676012 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:49.676022 | orchestrator | 2025-10-09 10:18:49.676031 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-10-09 10:18:49.676041 | orchestrator | Thursday 09 October 2025 10:18:45 +0000 (0:00:00.529) 0:00:19.789 ****** 2025-10-09 10:18:49.676051 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:49.676061 | orchestrator | 2025-10-09 10:18:49.676070 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-10-09 10:18:49.676080 | orchestrator | Thursday 09 October 2025 10:18:45 +0000 (0:00:00.154) 0:00:19.943 ****** 2025-10-09 10:18:49.676090 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676100 | orchestrator | 2025-10-09 10:18:49.676109 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-10-09 10:18:49.676119 | orchestrator | Thursday 09 October 2025 10:18:45 +0000 (0:00:00.113) 0:00:20.056 ****** 2025-10-09 10:18:49.676129 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676138 | orchestrator | 2025-10-09 10:18:49.676150 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-10-09 10:18:49.676162 | orchestrator | Thursday 09 October 2025 10:18:45 +0000 (0:00:00.122) 0:00:20.179 ****** 2025-10-09 10:18:49.676194 | orchestrator | ok: 
[testbed-node-3] => { 2025-10-09 10:18:49.676206 | orchestrator |  "vgs_report": { 2025-10-09 10:18:49.676230 | orchestrator |  "vg": [] 2025-10-09 10:18:49.676242 | orchestrator |  } 2025-10-09 10:18:49.676253 | orchestrator | } 2025-10-09 10:18:49.676264 | orchestrator | 2025-10-09 10:18:49.676275 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-10-09 10:18:49.676287 | orchestrator | Thursday 09 October 2025 10:18:45 +0000 (0:00:00.142) 0:00:20.322 ****** 2025-10-09 10:18:49.676321 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676331 | orchestrator | 2025-10-09 10:18:49.676342 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-10-09 10:18:49.676353 | orchestrator | Thursday 09 October 2025 10:18:46 +0000 (0:00:00.172) 0:00:20.494 ****** 2025-10-09 10:18:49.676364 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676374 | orchestrator | 2025-10-09 10:18:49.676385 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-10-09 10:18:49.676396 | orchestrator | Thursday 09 October 2025 10:18:46 +0000 (0:00:00.144) 0:00:20.638 ****** 2025-10-09 10:18:49.676407 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676418 | orchestrator | 2025-10-09 10:18:49.676429 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-10-09 10:18:49.676440 | orchestrator | Thursday 09 October 2025 10:18:46 +0000 (0:00:00.381) 0:00:21.020 ****** 2025-10-09 10:18:49.676451 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676461 | orchestrator | 2025-10-09 10:18:49.676472 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-10-09 10:18:49.676483 | orchestrator | Thursday 09 October 2025 10:18:46 +0000 (0:00:00.205) 0:00:21.225 ****** 2025-10-09 10:18:49.676494 | orchestrator | skipping: 
[testbed-node-3] 2025-10-09 10:18:49.676505 | orchestrator | 2025-10-09 10:18:49.676515 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-10-09 10:18:49.676525 | orchestrator | Thursday 09 October 2025 10:18:47 +0000 (0:00:00.177) 0:00:21.403 ****** 2025-10-09 10:18:49.676534 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676544 | orchestrator | 2025-10-09 10:18:49.676553 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-10-09 10:18:49.676563 | orchestrator | Thursday 09 October 2025 10:18:47 +0000 (0:00:00.164) 0:00:21.567 ****** 2025-10-09 10:18:49.676572 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676582 | orchestrator | 2025-10-09 10:18:49.676591 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-10-09 10:18:49.676601 | orchestrator | Thursday 09 October 2025 10:18:47 +0000 (0:00:00.151) 0:00:21.719 ****** 2025-10-09 10:18:49.676611 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676620 | orchestrator | 2025-10-09 10:18:49.676630 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-10-09 10:18:49.676654 | orchestrator | Thursday 09 October 2025 10:18:47 +0000 (0:00:00.143) 0:00:21.862 ****** 2025-10-09 10:18:49.676664 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676674 | orchestrator | 2025-10-09 10:18:49.676684 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-10-09 10:18:49.676693 | orchestrator | Thursday 09 October 2025 10:18:47 +0000 (0:00:00.137) 0:00:22.000 ****** 2025-10-09 10:18:49.676703 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676712 | orchestrator | 2025-10-09 10:18:49.676722 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-10-09 10:18:49.676731 | 
orchestrator | Thursday 09 October 2025 10:18:47 +0000 (0:00:00.134) 0:00:22.135 ****** 2025-10-09 10:18:49.676741 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676751 | orchestrator | 2025-10-09 10:18:49.676760 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-10-09 10:18:49.676770 | orchestrator | Thursday 09 October 2025 10:18:47 +0000 (0:00:00.119) 0:00:22.254 ****** 2025-10-09 10:18:49.676780 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676789 | orchestrator | 2025-10-09 10:18:49.676806 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-10-09 10:18:49.676815 | orchestrator | Thursday 09 October 2025 10:18:48 +0000 (0:00:00.157) 0:00:22.411 ****** 2025-10-09 10:18:49.676825 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676834 | orchestrator | 2025-10-09 10:18:49.676844 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-10-09 10:18:49.676854 | orchestrator | Thursday 09 October 2025 10:18:48 +0000 (0:00:00.143) 0:00:22.555 ****** 2025-10-09 10:18:49.676863 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676873 | orchestrator | 2025-10-09 10:18:49.676882 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-10-09 10:18:49.676892 | orchestrator | Thursday 09 October 2025 10:18:48 +0000 (0:00:00.148) 0:00:22.703 ****** 2025-10-09 10:18:49.676903 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:49.676914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:49.676924 | orchestrator | skipping: [testbed-node-3] 2025-10-09 
10:18:49.676934 | orchestrator | 2025-10-09 10:18:49.676943 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-10-09 10:18:49.676953 | orchestrator | Thursday 09 October 2025 10:18:48 +0000 (0:00:00.401) 0:00:23.105 ****** 2025-10-09 10:18:49.676963 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:49.676973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:49.676982 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.676992 | orchestrator | 2025-10-09 10:18:49.677001 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-10-09 10:18:49.677011 | orchestrator | Thursday 09 October 2025 10:18:48 +0000 (0:00:00.191) 0:00:23.297 ****** 2025-10-09 10:18:49.677021 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:49.677031 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:49.677041 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.677051 | orchestrator | 2025-10-09 10:18:49.677060 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-10-09 10:18:49.677070 | orchestrator | Thursday 09 October 2025 10:18:49 +0000 (0:00:00.212) 0:00:23.509 ****** 2025-10-09 10:18:49.677080 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 
10:18:49.677089 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:49.677099 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.677108 | orchestrator | 2025-10-09 10:18:49.677118 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-10-09 10:18:49.677128 | orchestrator | Thursday 09 October 2025 10:18:49 +0000 (0:00:00.176) 0:00:23.685 ****** 2025-10-09 10:18:49.677137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:49.677147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:49.677156 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:49.677172 | orchestrator | 2025-10-09 10:18:49.677182 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-10-09 10:18:49.677191 | orchestrator | Thursday 09 October 2025 10:18:49 +0000 (0:00:00.172) 0:00:23.858 ****** 2025-10-09 10:18:49.677208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:49.677224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:55.469576 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:55.469687 | orchestrator | 2025-10-09 10:18:55.469700 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-10-09 10:18:55.469711 | orchestrator | Thursday 09 October 2025 
10:18:49 +0000 (0:00:00.156) 0:00:24.014 ****** 2025-10-09 10:18:55.469719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:55.469729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:55.469785 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:55.469795 | orchestrator | 2025-10-09 10:18:55.469804 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-10-09 10:18:55.469813 | orchestrator | Thursday 09 October 2025 10:18:49 +0000 (0:00:00.217) 0:00:24.232 ****** 2025-10-09 10:18:55.469822 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:55.469830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:55.469838 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:55.469847 | orchestrator | 2025-10-09 10:18:55.469855 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-10-09 10:18:55.469864 | orchestrator | Thursday 09 October 2025 10:18:50 +0000 (0:00:00.173) 0:00:24.405 ****** 2025-10-09 10:18:55.469872 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:55.469881 | orchestrator | 2025-10-09 10:18:55.469889 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-10-09 10:18:55.469897 | orchestrator | Thursday 09 October 2025 10:18:50 +0000 (0:00:00.523) 0:00:24.928 ****** 2025-10-09 10:18:55.469905 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:55.469913 | 
orchestrator | 2025-10-09 10:18:55.469922 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-10-09 10:18:55.469930 | orchestrator | Thursday 09 October 2025 10:18:51 +0000 (0:00:00.528) 0:00:25.457 ****** 2025-10-09 10:18:55.469938 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:18:55.469946 | orchestrator | 2025-10-09 10:18:55.469954 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-10-09 10:18:55.469962 | orchestrator | Thursday 09 October 2025 10:18:51 +0000 (0:00:00.132) 0:00:25.589 ****** 2025-10-09 10:18:55.469970 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'vg_name': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'}) 2025-10-09 10:18:55.469980 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'vg_name': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'}) 2025-10-09 10:18:55.469988 | orchestrator | 2025-10-09 10:18:55.470011 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-10-09 10:18:55.470056 | orchestrator | Thursday 09 October 2025 10:18:51 +0000 (0:00:00.172) 0:00:25.762 ****** 2025-10-09 10:18:55.470064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:55.470089 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:55.470097 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:55.470105 | orchestrator | 2025-10-09 10:18:55.470113 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-10-09 10:18:55.470122 | orchestrator | Thursday 09 October 2025 10:18:51 +0000 
(0:00:00.396) 0:00:26.159 ****** 2025-10-09 10:18:55.470131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:55.470140 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:55.470149 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:55.470158 | orchestrator | 2025-10-09 10:18:55.470166 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-10-09 10:18:55.470176 | orchestrator | Thursday 09 October 2025 10:18:51 +0000 (0:00:00.152) 0:00:26.312 ****** 2025-10-09 10:18:55.470185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})  2025-10-09 10:18:55.470194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})  2025-10-09 10:18:55.470203 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:18:55.470212 | orchestrator | 2025-10-09 10:18:55.470221 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-10-09 10:18:55.470230 | orchestrator | Thursday 09 October 2025 10:18:52 +0000 (0:00:00.176) 0:00:26.488 ****** 2025-10-09 10:18:55.470239 | orchestrator | ok: [testbed-node-3] => { 2025-10-09 10:18:55.470248 | orchestrator |  "lvm_report": { 2025-10-09 10:18:55.470257 | orchestrator |  "lv": [ 2025-10-09 10:18:55.470266 | orchestrator |  { 2025-10-09 10:18:55.470317 | orchestrator |  "lv_name": "osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0", 2025-10-09 10:18:55.470328 | orchestrator |  "vg_name": "ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0" 2025-10-09 10:18:55.470338 
| orchestrator |  }, 2025-10-09 10:18:55.470347 | orchestrator |  { 2025-10-09 10:18:55.470356 | orchestrator |  "lv_name": "osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74", 2025-10-09 10:18:55.470365 | orchestrator |  "vg_name": "ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74" 2025-10-09 10:18:55.470374 | orchestrator |  } 2025-10-09 10:18:55.470384 | orchestrator |  ], 2025-10-09 10:18:55.470393 | orchestrator |  "pv": [ 2025-10-09 10:18:55.470402 | orchestrator |  { 2025-10-09 10:18:55.470411 | orchestrator |  "pv_name": "/dev/sdb", 2025-10-09 10:18:55.470420 | orchestrator |  "vg_name": "ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74" 2025-10-09 10:18:55.470430 | orchestrator |  }, 2025-10-09 10:18:55.470438 | orchestrator |  { 2025-10-09 10:18:55.470447 | orchestrator |  "pv_name": "/dev/sdc", 2025-10-09 10:18:55.470456 | orchestrator |  "vg_name": "ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0" 2025-10-09 10:18:55.470465 | orchestrator |  } 2025-10-09 10:18:55.470474 | orchestrator |  ] 2025-10-09 10:18:55.470483 | orchestrator |  } 2025-10-09 10:18:55.470491 | orchestrator | } 2025-10-09 10:18:55.470499 | orchestrator | 2025-10-09 10:18:55.470507 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-10-09 10:18:55.470515 | orchestrator | 2025-10-09 10:18:55.470523 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:18:55.470531 | orchestrator | Thursday 09 October 2025 10:18:52 +0000 (0:00:00.318) 0:00:26.806 ****** 2025-10-09 10:18:55.470540 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-10-09 10:18:55.470554 | orchestrator | 2025-10-09 10:18:55.470562 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:18:55.470571 | orchestrator | Thursday 09 October 2025 10:18:52 +0000 (0:00:00.266) 0:00:27.073 ****** 2025-10-09 10:18:55.470579 | orchestrator | ok: [testbed-node-4] 
2025-10-09 10:18:55.470587 | orchestrator |
2025-10-09 10:18:55.470595 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:55.470603 | orchestrator | Thursday 09 October 2025 10:18:52 +0000 (0:00:00.232) 0:00:27.305 ******
2025-10-09 10:18:55.470611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-10-09 10:18:55.470619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-10-09 10:18:55.470627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-10-09 10:18:55.470635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-10-09 10:18:55.470643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-10-09 10:18:55.470651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-10-09 10:18:55.470659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-10-09 10:18:55.470671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-10-09 10:18:55.470680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-10-09 10:18:55.470688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-10-09 10:18:55.470696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-10-09 10:18:55.470704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-10-09 10:18:55.470712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-10-09 10:18:55.470720 | orchestrator |
2025-10-09 10:18:55.470728 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:55.470736 | orchestrator | Thursday 09 October 2025 10:18:53 +0000 (0:00:00.468) 0:00:27.774 ******
2025-10-09 10:18:55.470743 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:18:55.470751 | orchestrator |
2025-10-09 10:18:55.470759 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:55.470767 | orchestrator | Thursday 09 October 2025 10:18:53 +0000 (0:00:00.242) 0:00:28.016 ******
2025-10-09 10:18:55.470775 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:18:55.470783 | orchestrator |
2025-10-09 10:18:55.470791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:55.470799 | orchestrator | Thursday 09 October 2025 10:18:53 +0000 (0:00:00.221) 0:00:28.238 ******
2025-10-09 10:18:55.470807 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:18:55.470815 | orchestrator |
2025-10-09 10:18:55.470823 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:55.470831 | orchestrator | Thursday 09 October 2025 10:18:54 +0000 (0:00:00.657) 0:00:28.895 ******
2025-10-09 10:18:55.470839 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:18:55.470847 | orchestrator |
2025-10-09 10:18:55.470855 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:55.470863 | orchestrator | Thursday 09 October 2025 10:18:54 +0000 (0:00:00.222) 0:00:29.118 ******
2025-10-09 10:18:55.470870 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:18:55.470878 | orchestrator |
2025-10-09 10:18:55.470886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:55.470894 | orchestrator | Thursday 09 October 2025 10:18:55 +0000 (0:00:00.237) 0:00:29.355 ******
2025-10-09 10:18:55.470902 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:18:55.470910 | orchestrator |
2025-10-09 10:18:55.470923 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:18:55.470931 | orchestrator | Thursday 09 October 2025 10:18:55 +0000 (0:00:00.203) 0:00:29.559 ******
2025-10-09 10:18:55.470939 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:18:55.470947 | orchestrator |
2025-10-09 10:18:55.470960 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:19:06.902269 | orchestrator | Thursday 09 October 2025 10:18:55 +0000 (0:00:00.250) 0:00:29.810 ******
2025-10-09 10:19:06.902425 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.902457 | orchestrator |
2025-10-09 10:19:06.902477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:19:06.902506 | orchestrator | Thursday 09 October 2025 10:18:55 +0000 (0:00:00.211) 0:00:30.021 ******
2025-10-09 10:19:06.902526 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6)
2025-10-09 10:19:06.902546 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6)
2025-10-09 10:19:06.902563 | orchestrator |
2025-10-09 10:19:06.902582 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:19:06.902601 | orchestrator | Thursday 09 October 2025 10:18:56 +0000 (0:00:00.461) 0:00:30.482 ******
2025-10-09 10:19:06.902619 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d)
2025-10-09 10:19:06.902639 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d)
2025-10-09 10:19:06.902658 | orchestrator |
2025-10-09 10:19:06.902677 | orchestrator | TASK [Add known
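The "Add known links" tasks above resolve persistent `/dev/disk/by-id` aliases (e.g. `scsi-0QEMU_QEMU_HARDDISK_…`) back to kernel device names so both forms can refer to the same disk. A minimal sketch of that grouping step, assuming the role essentially inverts a link-to-device mapping (the helper name and input shape are illustrative, not the role's actual code):

```python
def links_by_device(link_targets):
    """Invert {by-id link name: kernel device} into
    {kernel device: [by-id link names]} -- roughly what
    _add-device-links.yml accumulates per device."""
    grouped = {}
    for link, dev in sorted(link_targets.items()):
        grouped.setdefault(dev, []).append(link)
    return grouped

# Example using the two by-id aliases seen in the log for one disk:
aliases = links_by_device({
    "scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6": "sdb",
})
```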
links to the list of available block devices] ******************
2025-10-09 10:19:06.902692 | orchestrator | Thursday 09 October 2025 10:18:56 +0000 (0:00:00.435) 0:00:30.917 ******
2025-10-09 10:19:06.902703 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168)
2025-10-09 10:19:06.902714 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168)
2025-10-09 10:19:06.902725 | orchestrator |
2025-10-09 10:19:06.902735 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:19:06.902746 | orchestrator | Thursday 09 October 2025 10:18:57 +0000 (0:00:00.451) 0:00:31.369 ******
2025-10-09 10:19:06.902757 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3)
2025-10-09 10:19:06.902771 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3)
2025-10-09 10:19:06.902783 | orchestrator |
2025-10-09 10:19:06.902796 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-10-09 10:19:06.902809 | orchestrator | Thursday 09 October 2025 10:18:57 +0000 (0:00:00.725) 0:00:32.095 ******
2025-10-09 10:19:06.902821 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-10-09 10:19:06.902833 | orchestrator |
2025-10-09 10:19:06.902846 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.902858 | orchestrator | Thursday 09 October 2025 10:18:58 +0000 (0:00:00.612) 0:00:32.707 ******
2025-10-09 10:19:06.902871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-10-09 10:19:06.902884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-10-09 10:19:06.902897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-10-09 10:19:06.902910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-10-09 10:19:06.902923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-10-09 10:19:06.902936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-10-09 10:19:06.902967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-10-09 10:19:06.903022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-10-09 10:19:06.903043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-10-09 10:19:06.903063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-10-09 10:19:06.903083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-10-09 10:19:06.903102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-10-09 10:19:06.903118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-10-09 10:19:06.903131 | orchestrator |
2025-10-09 10:19:06.903144 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903155 | orchestrator | Thursday 09 October 2025 10:18:59 +0000 (0:00:00.681) 0:00:33.389 ******
2025-10-09 10:19:06.903166 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903177 | orchestrator |
2025-10-09 10:19:06.903189 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903200 | orchestrator | Thursday 09 October 2025 10:18:59 +0000 (0:00:00.209) 0:00:33.599 ******
2025-10-09 10:19:06.903211 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903223 | orchestrator |
2025-10-09 10:19:06.903235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903245 | orchestrator | Thursday 09 October 2025 10:18:59 +0000 (0:00:00.223) 0:00:33.822 ******
2025-10-09 10:19:06.903256 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903267 | orchestrator |
2025-10-09 10:19:06.903278 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903313 | orchestrator | Thursday 09 October 2025 10:18:59 +0000 (0:00:00.227) 0:00:34.050 ******
2025-10-09 10:19:06.903325 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903337 | orchestrator |
2025-10-09 10:19:06.903368 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903380 | orchestrator | Thursday 09 October 2025 10:18:59 +0000 (0:00:00.209) 0:00:34.260 ******
2025-10-09 10:19:06.903391 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903402 | orchestrator |
2025-10-09 10:19:06.903413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903425 | orchestrator | Thursday 09 October 2025 10:19:00 +0000 (0:00:00.231) 0:00:34.491 ******
2025-10-09 10:19:06.903436 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903447 | orchestrator |
2025-10-09 10:19:06.903458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903469 | orchestrator | Thursday 09 October 2025 10:19:00 +0000 (0:00:00.228) 0:00:34.720 ******
2025-10-09 10:19:06.903479 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903490 | orchestrator |
2025-10-09 10:19:06.903501 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903512 | orchestrator | Thursday 09 October 2025 10:19:00 +0000 (0:00:00.205) 0:00:34.925 ******
2025-10-09 10:19:06.903523 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903534 | orchestrator |
2025-10-09 10:19:06.903545 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903556 | orchestrator | Thursday 09 October 2025 10:19:00 +0000 (0:00:00.255) 0:00:35.181 ******
2025-10-09 10:19:06.903567 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-10-09 10:19:06.903578 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-10-09 10:19:06.903589 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-10-09 10:19:06.903600 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-10-09 10:19:06.903610 | orchestrator |
2025-10-09 10:19:06.903622 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903633 | orchestrator | Thursday 09 October 2025 10:19:01 +0000 (0:00:00.938) 0:00:36.119 ******
2025-10-09 10:19:06.903653 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903664 | orchestrator |
2025-10-09 10:19:06.903675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903686 | orchestrator | Thursday 09 October 2025 10:19:01 +0000 (0:00:00.217) 0:00:36.337 ******
2025-10-09 10:19:06.903697 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903708 | orchestrator |
2025-10-09 10:19:06.903719 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903730 | orchestrator | Thursday 09 October 2025 10:19:02 +0000 (0:00:00.734) 0:00:37.072 ******
2025-10-09 10:19:06.903741 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903752 | orchestrator |
2025-10-09
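The "Add known partitions" tasks above extend the candidate device list with each device's partitions (here `sda1`, `sda14`, `sda15`, `sda16`), while devices without partitions are skipped. A minimal sketch of that accumulation, assuming a simple list-extension model (function and argument names are illustrative):

```python
def add_partitions(available, partitions_by_device):
    """Extend the list of available block devices with the known
    partitions of each device, mirroring the per-device include of
    _add-device-partitions.yml."""
    for dev in list(available):  # iterate over a copy; we mutate `available`
        available.extend(partitions_by_device.get(dev, []))
    return available
```

For example, `add_partitions(["sda", "sdb"], {"sda": ["sda1", "sda14", "sda15", "sda16"]})` adds only `sda`'s partitions, matching the log where every other device's partition task is skipped.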
10:19:06.903762 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-10-09 10:19:06.903773 | orchestrator | Thursday 09 October 2025 10:19:02 +0000 (0:00:00.218) 0:00:37.290 ******
2025-10-09 10:19:06.903784 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903795 | orchestrator |
2025-10-09 10:19:06.903807 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-10-09 10:19:06.903817 | orchestrator | Thursday 09 October 2025 10:19:03 +0000 (0:00:00.213) 0:00:37.504 ******
2025-10-09 10:19:06.903835 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.903846 | orchestrator |
2025-10-09 10:19:06.903857 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-10-09 10:19:06.903868 | orchestrator | Thursday 09 October 2025 10:19:03 +0000 (0:00:00.164) 0:00:37.669 ******
2025-10-09 10:19:06.903879 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'}})
2025-10-09 10:19:06.903890 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'db411f8a-05b0-54f7-b748-fd517a3c676f'}})
2025-10-09 10:19:06.903902 | orchestrator |
2025-10-09 10:19:06.903913 | orchestrator | TASK [Create block VGs] ********************************************************
2025-10-09 10:19:06.903924 | orchestrator | Thursday 09 October 2025 10:19:03 +0000 (0:00:00.190) 0:00:37.859 ******
2025-10-09 10:19:06.903936 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:06.903948 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:06.903959 | orchestrator |
2025-10-09 10:19:06.903971 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-10-09 10:19:06.903982 | orchestrator | Thursday 09 October 2025 10:19:05 +0000 (0:00:01.854) 0:00:39.713 ******
2025-10-09 10:19:06.903993 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:06.904005 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:06.904016 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:06.904027 | orchestrator |
2025-10-09 10:19:06.904038 | orchestrator | TASK [Create block LVs] ********************************************************
2025-10-09 10:19:06.904049 | orchestrator | Thursday 09 October 2025 10:19:05 +0000 (0:00:00.177) 0:00:39.891 ******
2025-10-09 10:19:06.904060 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:06.904071 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:06.904082 | orchestrator |
2025-10-09 10:19:06.904100 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-10-09 10:19:12.987805 | orchestrator | Thursday 09 October 2025 10:19:06 +0000 (0:00:01.348) 0:00:41.240 ******
2025-10-09 10:19:12.987920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:12.987938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f',
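The "Create block VGs" / "Create block LVs" steps above carve one volume group and one logical volume per OSD device, named from the `osd_lvm_uuid` in `ceph_osd_devices` (the `ceph-<uuid>` / `osd-block-<uuid>` pattern is visible in the log). A sketch of the equivalent LVM command lines, assuming a VG per device and an LV spanning all of it (the exact flags the role uses are an assumption):

```python
def block_vg_lv_cmds(device, osd_lvm_uuid):
    """Build the vgcreate/lvcreate invocations that roughly correspond
    to 'Create block VGs' and 'Create block LVs' for one OSD device.
    The -l 100%FREE choice (LV fills the VG) is an assumption."""
    vg = f"ceph-{osd_lvm_uuid}"          # VG name as seen in the log
    lv = f"osd-block-{osd_lvm_uuid}"     # LV name as seen in the log
    return (
        ["vgcreate", vg, f"/dev/{device}"],
        ["lvcreate", "-n", lv, "-l", "100%FREE", vg],
    )
```

For the `sdb` entry in the log this yields `vgcreate ceph-bec6f5a4-… /dev/sdb` followed by `lvcreate -n osd-block-bec6f5a4-… -l 100%FREE ceph-bec6f5a4-…`.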
'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:12.987950 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.987962 | orchestrator |
2025-10-09 10:19:12.987974 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-10-09 10:19:12.987985 | orchestrator | Thursday 09 October 2025 10:19:07 +0000 (0:00:00.152) 0:00:41.392 ******
2025-10-09 10:19:12.987996 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988007 | orchestrator |
2025-10-09 10:19:12.988018 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-10-09 10:19:12.988029 | orchestrator | Thursday 09 October 2025 10:19:07 +0000 (0:00:00.158) 0:00:41.550 ******
2025-10-09 10:19:12.988040 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:12.988051 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:12.988063 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988073 | orchestrator |
2025-10-09 10:19:12.988084 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-10-09 10:19:12.988095 | orchestrator | Thursday 09 October 2025 10:19:07 +0000 (0:00:00.159) 0:00:41.710 ******
2025-10-09 10:19:12.988106 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988117 | orchestrator |
2025-10-09 10:19:12.988128 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-10-09 10:19:12.988139 | orchestrator | Thursday 09 October 2025 10:19:07 +0000 (0:00:00.145) 0:00:41.855 ******
2025-10-09 10:19:12.988150 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:12.988161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:12.988172 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988183 | orchestrator |
2025-10-09 10:19:12.988194 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-10-09 10:19:12.988205 | orchestrator | Thursday 09 October 2025 10:19:07 +0000 (0:00:00.434) 0:00:42.290 ******
2025-10-09 10:19:12.988230 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988242 | orchestrator |
2025-10-09 10:19:12.988253 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-10-09 10:19:12.988264 | orchestrator | Thursday 09 October 2025 10:19:08 +0000 (0:00:00.156) 0:00:42.447 ******
2025-10-09 10:19:12.988275 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:12.988286 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:12.988334 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988347 | orchestrator |
2025-10-09 10:19:12.988359 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-10-09 10:19:12.988371 | orchestrator | Thursday 09 October 2025 10:19:08 +0000 (0:00:00.157) 0:00:42.617 ******
2025-10-09 10:19:12.988383 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:19:12.988397 | orchestrator |
2025-10-09 10:19:12.988409 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-10-09 10:19:12.988421 | orchestrator | Thursday 09 October 2025 10:19:08 +0000 (0:00:00.157) 0:00:42.775 ******
2025-10-09 10:19:12.988444 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:12.988458 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:12.988470 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988483 | orchestrator |
2025-10-09 10:19:12.988495 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-10-09 10:19:12.988508 | orchestrator | Thursday 09 October 2025 10:19:08 +0000 (0:00:00.149) 0:00:42.924 ******
2025-10-09 10:19:12.988520 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:12.988532 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:12.988544 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988556 | orchestrator |
2025-10-09 10:19:12.988569 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-10-09 10:19:12.988582 | orchestrator | Thursday 09 October 2025 10:19:08 +0000 (0:00:00.169) 0:00:43.094 ******
2025-10-09 10:19:12.988612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:12.988625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg':
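The "Count OSDs put on …" tasks above tally how many entries in `lvm_volumes` place their DB (or WAL) on each shared VG; a later task fails if a tally exceeds that VG's `num_osds`. A minimal sketch of the DB-side tally, assuming `lvm_volumes` entries may carry a `db_vg` key alongside `data`/`data_vg` (key names are an assumption based on the task names; here they are all skipped because no DB/WAL devices are configured):

```python
def count_osds_per_db_vg(lvm_volumes):
    """Count OSDs per db_vg from lvm_volumes entries; entries without
    a db_vg (pure block-only OSDs, as in this run) are ignored."""
    counts = {}
    for vol in lvm_volumes:
        db_vg = vol.get("db_vg")
        if db_vg:
            counts[db_vg] = counts.get(db_vg, 0) + 1
    return counts
```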
'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:12.988638 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988650 | orchestrator |
2025-10-09 10:19:12.988662 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-10-09 10:19:12.988675 | orchestrator | Thursday 09 October 2025 10:19:08 +0000 (0:00:00.164) 0:00:43.258 ******
2025-10-09 10:19:12.988688 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988699 | orchestrator |
2025-10-09 10:19:12.988710 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-10-09 10:19:12.988721 | orchestrator | Thursday 09 October 2025 10:19:09 +0000 (0:00:00.150) 0:00:43.409 ******
2025-10-09 10:19:12.988732 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988743 | orchestrator |
2025-10-09 10:19:12.988754 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-10-09 10:19:12.988764 | orchestrator | Thursday 09 October 2025 10:19:09 +0000 (0:00:00.126) 0:00:43.535 ******
2025-10-09 10:19:12.988775 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.988786 | orchestrator |
2025-10-09 10:19:12.988797 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-10-09 10:19:12.988808 | orchestrator | Thursday 09 October 2025 10:19:09 +0000 (0:00:00.163) 0:00:43.699 ******
2025-10-09 10:19:12.988819 | orchestrator | ok: [testbed-node-4] => {
2025-10-09 10:19:12.988830 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-10-09 10:19:12.988841 | orchestrator | }
2025-10-09 10:19:12.988852 | orchestrator |
2025-10-09 10:19:12.988863 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-10-09 10:19:12.988874 | orchestrator | Thursday 09 October 2025 10:19:09 +0000 (0:00:00.167) 0:00:43.867 ******
2025-10-09 10:19:12.988885 | orchestrator | ok: [testbed-node-4] => {
2025-10-09 10:19:12.988896 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-10-09 10:19:12.988907 | orchestrator | }
2025-10-09 10:19:12.988918 | orchestrator |
2025-10-09 10:19:12.988929 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-10-09 10:19:12.988940 | orchestrator | Thursday 09 October 2025 10:19:09 +0000 (0:00:00.154) 0:00:44.022 ******
2025-10-09 10:19:12.988951 | orchestrator | ok: [testbed-node-4] => {
2025-10-09 10:19:12.988962 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-10-09 10:19:12.988980 | orchestrator | }
2025-10-09 10:19:12.988991 | orchestrator |
2025-10-09 10:19:12.989002 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-10-09 10:19:12.989013 | orchestrator | Thursday 09 October 2025 10:19:10 +0000 (0:00:00.399) 0:00:44.422 ******
2025-10-09 10:19:12.989024 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:19:12.989035 | orchestrator |
2025-10-09 10:19:12.989047 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-10-09 10:19:12.989058 | orchestrator | Thursday 09 October 2025 10:19:10 +0000 (0:00:00.588) 0:00:45.010 ******
2025-10-09 10:19:12.989069 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:19:12.989080 | orchestrator |
2025-10-09 10:19:12.989091 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-10-09 10:19:12.989102 | orchestrator | Thursday 09 October 2025 10:19:11 +0000 (0:00:00.555) 0:00:45.565 ******
2025-10-09 10:19:12.989113 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:19:12.989124 | orchestrator |
2025-10-09 10:19:12.989136 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-10-09 10:19:12.989147 | orchestrator | Thursday 09 October 2025 10:19:11 +0000 (0:00:00.560) 0:00:46.126 ******
2025-10-09 10:19:12.989158 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:19:12.989169 | orchestrator |
2025-10-09 10:19:12.989180 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-10-09 10:19:12.989191 | orchestrator | Thursday 09 October 2025 10:19:11 +0000 (0:00:00.170) 0:00:46.296 ******
2025-10-09 10:19:12.989201 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.989212 | orchestrator |
2025-10-09 10:19:12.989223 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-10-09 10:19:12.989234 | orchestrator | Thursday 09 October 2025 10:19:12 +0000 (0:00:00.127) 0:00:46.423 ******
2025-10-09 10:19:12.989253 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.989264 | orchestrator |
2025-10-09 10:19:12.989275 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-10-09 10:19:12.989286 | orchestrator | Thursday 09 October 2025 10:19:12 +0000 (0:00:00.138) 0:00:46.562 ******
2025-10-09 10:19:12.989317 | orchestrator | ok: [testbed-node-4] => {
2025-10-09 10:19:12.989328 | orchestrator |  "vgs_report": {
2025-10-09 10:19:12.989339 | orchestrator |  "vg": []
2025-10-09 10:19:12.989350 | orchestrator |  }
2025-10-09 10:19:12.989361 | orchestrator | }
2025-10-09 10:19:12.989372 | orchestrator |
2025-10-09 10:19:12.989383 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-10-09 10:19:12.989394 | orchestrator | Thursday 09 October 2025 10:19:12 +0000 (0:00:00.140) 0:00:46.733 ******
2025-10-09 10:19:12.989405 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.989416 | orchestrator |
2025-10-09 10:19:12.989427 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-10-09 10:19:12.989438 | orchestrator | Thursday 09 October 2025 10:19:12 +0000 (0:00:00.149) 0:00:46.874 ******
2025-10-09 10:19:12.989449 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.989459 | orchestrator |
2025-10-09 10:19:12.989470 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-10-09 10:19:12.989481 | orchestrator | Thursday 09 October 2025 10:19:12 +0000 (0:00:00.155) 0:00:47.023 ******
2025-10-09 10:19:12.989492 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.989503 | orchestrator |
2025-10-09 10:19:12.989514 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-10-09 10:19:12.989525 | orchestrator | Thursday 09 October 2025 10:19:12 +0000 (0:00:00.150) 0:00:47.179 ******
2025-10-09 10:19:12.989536 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:12.989547 | orchestrator |
2025-10-09 10:19:12.989558 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-10-09 10:19:12.989576 | orchestrator | Thursday 09 October 2025 10:19:12 +0000 (0:00:00.150) 0:00:47.329 ******
2025-10-09 10:19:18.182547 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.182651 | orchestrator |
2025-10-09 10:19:18.182691 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-10-09 10:19:18.182704 | orchestrator | Thursday 09 October 2025 10:19:13 +0000 (0:00:00.403) 0:00:47.733 ******
2025-10-09 10:19:18.182715 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.182726 | orchestrator |
2025-10-09 10:19:18.182737 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-10-09 10:19:18.182749 | orchestrator | Thursday 09 October 2025 10:19:13 +0000 (0:00:00.135) 0:00:47.868 ******
2025-10-09 10:19:18.182759 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.182770 | orchestrator |
2025-10-09 10:19:18.182781 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices]
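The "Gather … VGs with total and available size in bytes" tasks feed a combined JSON report (printed above as `vgs_report` with an empty `"vg"` list, since no DB/WAL VGs exist in this run). LVM's `vgs` can emit exactly this shape via something like `vgs --units b --nosuffix --reportformat json -o vg_name,vg_size,vg_free`; a sketch of parsing that output into byte counts (the exact command the role runs is an assumption):

```python
import json

def parse_vgs_report(vgs_json):
    """Turn LVM's JSON report (report -> vg -> rows) into
    {vg_name: (size_bytes, free_bytes)}."""
    sizes = {}
    for report in json.loads(vgs_json).get("report", []):
        for vg in report.get("vg", []):
            sizes[vg["vg_name"]] = (int(vg["vg_size"]), int(vg["vg_free"]))
    return sizes

sample = '{"report": [{"vg": [{"vg_name": "ceph-db-0", "vg_size": "107374182400", "vg_free": "53687091200"}]}]}'
```

With the empty report from this run (`{"report": [{"vg": []}]}`) the function simply returns `{}`, matching the skipped size-calculation tasks that follow.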
****************
2025-10-09 10:19:18.182791 | orchestrator | Thursday 09 October 2025 10:19:13 +0000 (0:00:00.153) 0:00:48.022 ******
2025-10-09 10:19:18.182802 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.182813 | orchestrator |
2025-10-09 10:19:18.182824 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-10-09 10:19:18.182835 | orchestrator | Thursday 09 October 2025 10:19:13 +0000 (0:00:00.144) 0:00:48.166 ******
2025-10-09 10:19:18.182846 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.182856 | orchestrator |
2025-10-09 10:19:18.182867 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-10-09 10:19:18.182878 | orchestrator | Thursday 09 October 2025 10:19:13 +0000 (0:00:00.179) 0:00:48.346 ******
2025-10-09 10:19:18.182888 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.182899 | orchestrator |
2025-10-09 10:19:18.182910 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-10-09 10:19:18.182921 | orchestrator | Thursday 09 October 2025 10:19:14 +0000 (0:00:00.180) 0:00:48.526 ******
2025-10-09 10:19:18.182931 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.182942 | orchestrator |
2025-10-09 10:19:18.182953 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-10-09 10:19:18.182963 | orchestrator | Thursday 09 October 2025 10:19:14 +0000 (0:00:00.171) 0:00:48.698 ******
2025-10-09 10:19:18.182974 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.182985 | orchestrator |
2025-10-09 10:19:18.182995 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-10-09 10:19:18.183006 | orchestrator | Thursday 09 October 2025 10:19:14 +0000 (0:00:00.171) 0:00:48.870 ******
2025-10-09 10:19:18.183017 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.183028 | orchestrator |
2025-10-09 10:19:18.183038 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-10-09 10:19:18.183049 | orchestrator | Thursday 09 October 2025 10:19:14 +0000 (0:00:00.190) 0:00:49.061 ******
2025-10-09 10:19:18.183060 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.183070 | orchestrator |
2025-10-09 10:19:18.183083 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-10-09 10:19:18.183095 | orchestrator | Thursday 09 October 2025 10:19:14 +0000 (0:00:00.157) 0:00:49.218 ******
2025-10-09 10:19:18.183123 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:18.183138 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:18.183151 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.183163 | orchestrator |
2025-10-09 10:19:18.183175 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-10-09 10:19:18.183188 | orchestrator | Thursday 09 October 2025 10:19:15 +0000 (0:00:00.160) 0:00:49.379 ******
2025-10-09 10:19:18.183201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:18.183214 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:18.183235 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.183248 | orchestrator |
2025-10-09 10:19:18.183260 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-10-09 10:19:18.183273 | orchestrator | Thursday 09 October 2025 10:19:15 +0000 (0:00:00.152) 0:00:49.532 ******
2025-10-09 10:19:18.183285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:18.183323 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:18.183336 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.183348 | orchestrator |
2025-10-09 10:19:18.183360 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-10-09 10:19:18.183373 | orchestrator | Thursday 09 October 2025 10:19:15 +0000 (0:00:00.151) 0:00:49.684 ******
2025-10-09 10:19:18.183385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:18.183398 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:18.183411 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.183423 | orchestrator |
2025-10-09 10:19:18.183434 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-10-09 10:19:18.183463 | orchestrator | Thursday 09 October 2025 10:19:15 +0000 (0:00:00.405) 0:00:50.089 ******
2025-10-09 10:19:18.183476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:18.183487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:18.183498 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.183508 | orchestrator |
2025-10-09 10:19:18.183519 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-10-09 10:19:18.183530 | orchestrator | Thursday 09 October 2025 10:19:15 +0000 (0:00:00.164) 0:00:50.254 ******
2025-10-09 10:19:18.183541 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:18.183552 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:18.183563 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.183574 | orchestrator |
2025-10-09 10:19:18.183585 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-10-09 10:19:18.183596 | orchestrator | Thursday 09 October 2025 10:19:16 +0000 (0:00:00.175) 0:00:50.430 ******
2025-10-09 10:19:18.183607 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
2025-10-09 10:19:18.183618 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
2025-10-09 10:19:18.183629 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:19:18.183640 | orchestrator |
2025-10-09 10:19:18.183651 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-10-09 10:19:18.183662 | orchestrator | Thursday 09 October 2025 10:19:16 +0000 (0:00:00.174) 0:00:50.605 ******
2025-10-09 10:19:18.183673 | orchestrator | skipping: [testbed-node-4] =>
(item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})  2025-10-09 10:19:18.183691 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})  2025-10-09 10:19:18.183702 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:19:18.183713 | orchestrator | 2025-10-09 10:19:18.183729 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-10-09 10:19:18.183741 | orchestrator | Thursday 09 October 2025 10:19:16 +0000 (0:00:00.171) 0:00:50.777 ****** 2025-10-09 10:19:18.183752 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:19:18.183763 | orchestrator | 2025-10-09 10:19:18.183774 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-10-09 10:19:18.183784 | orchestrator | Thursday 09 October 2025 10:19:16 +0000 (0:00:00.516) 0:00:51.293 ****** 2025-10-09 10:19:18.183795 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:19:18.183806 | orchestrator | 2025-10-09 10:19:18.183817 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-10-09 10:19:18.183828 | orchestrator | Thursday 09 October 2025 10:19:17 +0000 (0:00:00.539) 0:00:51.832 ****** 2025-10-09 10:19:18.183838 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:19:18.183849 | orchestrator | 2025-10-09 10:19:18.183860 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-10-09 10:19:18.183871 | orchestrator | Thursday 09 October 2025 10:19:17 +0000 (0:00:00.156) 0:00:51.989 ****** 2025-10-09 10:19:18.183882 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'vg_name': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'}) 2025-10-09 10:19:18.183895 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'vg_name': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'}) 2025-10-09 10:19:18.183906 | orchestrator | 2025-10-09 10:19:18.183917 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-10-09 10:19:18.183927 | orchestrator | Thursday 09 October 2025 10:19:17 +0000 (0:00:00.208) 0:00:52.198 ****** 2025-10-09 10:19:18.183938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})  2025-10-09 10:19:18.183950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})  2025-10-09 10:19:18.183961 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:19:18.183971 | orchestrator | 2025-10-09 10:19:18.183983 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-10-09 10:19:18.183993 | orchestrator | Thursday 09 October 2025 10:19:18 +0000 (0:00:00.161) 0:00:52.359 ****** 2025-10-09 10:19:18.184004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})  2025-10-09 10:19:18.184015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})  2025-10-09 10:19:18.184032 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:19:24.541714 | orchestrator | 2025-10-09 10:19:24.541807 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-10-09 10:19:24.541821 | orchestrator | Thursday 09 October 2025 10:19:18 +0000 (0:00:00.166) 0:00:52.526 ****** 2025-10-09 10:19:24.541833 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})  2025-10-09 10:19:24.541845 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})  2025-10-09 10:19:24.541855 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:19:24.541867 | orchestrator | 2025-10-09 10:19:24.541877 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-10-09 10:19:24.541887 | orchestrator | Thursday 09 October 2025 10:19:18 +0000 (0:00:00.155) 0:00:52.681 ****** 2025-10-09 10:19:24.541918 | orchestrator | ok: [testbed-node-4] => { 2025-10-09 10:19:24.541928 | orchestrator |  "lvm_report": { 2025-10-09 10:19:24.541939 | orchestrator |  "lv": [ 2025-10-09 10:19:24.541949 | orchestrator |  { 2025-10-09 10:19:24.541959 | orchestrator |  "lv_name": "osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee", 2025-10-09 10:19:24.541969 | orchestrator |  "vg_name": "ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee" 2025-10-09 10:19:24.541979 | orchestrator |  }, 2025-10-09 10:19:24.541989 | orchestrator |  { 2025-10-09 10:19:24.541999 | orchestrator |  "lv_name": "osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f", 2025-10-09 10:19:24.542008 | orchestrator |  "vg_name": "ceph-db411f8a-05b0-54f7-b748-fd517a3c676f" 2025-10-09 10:19:24.542058 | orchestrator |  } 2025-10-09 10:19:24.542070 | orchestrator |  ], 2025-10-09 10:19:24.542080 | orchestrator |  "pv": [ 2025-10-09 10:19:24.542090 | orchestrator |  { 2025-10-09 10:19:24.542099 | orchestrator |  "pv_name": "/dev/sdb", 2025-10-09 10:19:24.542109 | orchestrator |  "vg_name": "ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee" 2025-10-09 10:19:24.542119 | orchestrator |  }, 2025-10-09 10:19:24.542128 | orchestrator |  { 2025-10-09 10:19:24.542138 | orchestrator |  "pv_name": "/dev/sdc", 2025-10-09 10:19:24.542148 | orchestrator |  "vg_name": 
"ceph-db411f8a-05b0-54f7-b748-fd517a3c676f" 2025-10-09 10:19:24.542158 | orchestrator |  } 2025-10-09 10:19:24.542167 | orchestrator |  ] 2025-10-09 10:19:24.542177 | orchestrator |  } 2025-10-09 10:19:24.542187 | orchestrator | } 2025-10-09 10:19:24.542197 | orchestrator | 2025-10-09 10:19:24.542206 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-10-09 10:19:24.542216 | orchestrator | 2025-10-09 10:19:24.542226 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:19:24.542236 | orchestrator | Thursday 09 October 2025 10:19:18 +0000 (0:00:00.524) 0:00:53.206 ****** 2025-10-09 10:19:24.542246 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-10-09 10:19:24.542258 | orchestrator | 2025-10-09 10:19:24.542269 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-10-09 10:19:24.542280 | orchestrator | Thursday 09 October 2025 10:19:19 +0000 (0:00:00.283) 0:00:53.490 ****** 2025-10-09 10:19:24.542329 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:19:24.542341 | orchestrator | 2025-10-09 10:19:24.542352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542363 | orchestrator | Thursday 09 October 2025 10:19:19 +0000 (0:00:00.257) 0:00:53.747 ****** 2025-10-09 10:19:24.542374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-10-09 10:19:24.542386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-10-09 10:19:24.542397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-10-09 10:19:24.542408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-10-09 10:19:24.542419 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-10-09 10:19:24.542430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-10-09 10:19:24.542440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-10-09 10:19:24.542452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-10-09 10:19:24.542463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-10-09 10:19:24.542474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-10-09 10:19:24.542484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-10-09 10:19:24.542505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-10-09 10:19:24.542516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-10-09 10:19:24.542527 | orchestrator | 2025-10-09 10:19:24.542538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542549 | orchestrator | Thursday 09 October 2025 10:19:19 +0000 (0:00:00.429) 0:00:54.177 ****** 2025-10-09 10:19:24.542560 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:24.542574 | orchestrator | 2025-10-09 10:19:24.542586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542597 | orchestrator | Thursday 09 October 2025 10:19:20 +0000 (0:00:00.203) 0:00:54.381 ****** 2025-10-09 10:19:24.542607 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:24.542617 | orchestrator | 2025-10-09 10:19:24.542627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542653 | orchestrator | 
Thursday 09 October 2025 10:19:20 +0000 (0:00:00.210) 0:00:54.592 ****** 2025-10-09 10:19:24.542663 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:24.542672 | orchestrator | 2025-10-09 10:19:24.542682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542691 | orchestrator | Thursday 09 October 2025 10:19:20 +0000 (0:00:00.209) 0:00:54.802 ****** 2025-10-09 10:19:24.542701 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:24.542710 | orchestrator | 2025-10-09 10:19:24.542720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542729 | orchestrator | Thursday 09 October 2025 10:19:20 +0000 (0:00:00.203) 0:00:55.005 ****** 2025-10-09 10:19:24.542739 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:24.542748 | orchestrator | 2025-10-09 10:19:24.542802 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542812 | orchestrator | Thursday 09 October 2025 10:19:20 +0000 (0:00:00.212) 0:00:55.218 ****** 2025-10-09 10:19:24.542822 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:24.542832 | orchestrator | 2025-10-09 10:19:24.542841 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542851 | orchestrator | Thursday 09 October 2025 10:19:21 +0000 (0:00:00.666) 0:00:55.885 ****** 2025-10-09 10:19:24.542860 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:24.542870 | orchestrator | 2025-10-09 10:19:24.542879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542889 | orchestrator | Thursday 09 October 2025 10:19:21 +0000 (0:00:00.224) 0:00:56.109 ****** 2025-10-09 10:19:24.542898 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:24.542908 | orchestrator | 2025-10-09 10:19:24.542917 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542927 | orchestrator | Thursday 09 October 2025 10:19:21 +0000 (0:00:00.216) 0:00:56.325 ****** 2025-10-09 10:19:24.542936 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0) 2025-10-09 10:19:24.542948 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0) 2025-10-09 10:19:24.542957 | orchestrator | 2025-10-09 10:19:24.542967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.542976 | orchestrator | Thursday 09 October 2025 10:19:22 +0000 (0:00:00.440) 0:00:56.765 ****** 2025-10-09 10:19:24.542986 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397) 2025-10-09 10:19:24.542995 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397) 2025-10-09 10:19:24.543005 | orchestrator | 2025-10-09 10:19:24.543014 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.543024 | orchestrator | Thursday 09 October 2025 10:19:22 +0000 (0:00:00.426) 0:00:57.192 ****** 2025-10-09 10:19:24.543046 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425) 2025-10-09 10:19:24.543056 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425) 2025-10-09 10:19:24.543065 | orchestrator | 2025-10-09 10:19:24.543075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.543085 | orchestrator | Thursday 09 October 2025 10:19:23 +0000 (0:00:00.456) 0:00:57.649 ****** 2025-10-09 10:19:24.543094 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f) 2025-10-09 10:19:24.543104 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f) 2025-10-09 10:19:24.543113 | orchestrator | 2025-10-09 10:19:24.543123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-10-09 10:19:24.543133 | orchestrator | Thursday 09 October 2025 10:19:23 +0000 (0:00:00.440) 0:00:58.090 ****** 2025-10-09 10:19:24.543142 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-10-09 10:19:24.543151 | orchestrator | 2025-10-09 10:19:24.543161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:24.543171 | orchestrator | Thursday 09 October 2025 10:19:24 +0000 (0:00:00.348) 0:00:58.438 ****** 2025-10-09 10:19:24.543180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-10-09 10:19:24.543190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-10-09 10:19:24.543199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-10-09 10:19:24.543209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-10-09 10:19:24.543218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-10-09 10:19:24.543228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-10-09 10:19:24.543237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-10-09 10:19:24.543247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-10-09 10:19:24.543256 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-10-09 10:19:24.543266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-10-09 10:19:24.543275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-10-09 10:19:24.543309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-10-09 10:19:33.695011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-10-09 10:19:33.695112 | orchestrator | 2025-10-09 10:19:33.695128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695141 | orchestrator | Thursday 09 October 2025 10:19:24 +0000 (0:00:00.440) 0:00:58.879 ****** 2025-10-09 10:19:33.695152 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695164 | orchestrator | 2025-10-09 10:19:33.695176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695187 | orchestrator | Thursday 09 October 2025 10:19:24 +0000 (0:00:00.201) 0:00:59.080 ****** 2025-10-09 10:19:33.695198 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695208 | orchestrator | 2025-10-09 10:19:33.695219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695230 | orchestrator | Thursday 09 October 2025 10:19:25 +0000 (0:00:00.730) 0:00:59.811 ****** 2025-10-09 10:19:33.695241 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695251 | orchestrator | 2025-10-09 10:19:33.695262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695354 | orchestrator | Thursday 09 October 2025 10:19:25 +0000 (0:00:00.198) 0:01:00.010 ****** 2025-10-09 10:19:33.695375 | orchestrator | 
skipping: [testbed-node-5] 2025-10-09 10:19:33.695392 | orchestrator | 2025-10-09 10:19:33.695403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695414 | orchestrator | Thursday 09 October 2025 10:19:25 +0000 (0:00:00.183) 0:01:00.193 ****** 2025-10-09 10:19:33.695425 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695436 | orchestrator | 2025-10-09 10:19:33.695447 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695458 | orchestrator | Thursday 09 October 2025 10:19:26 +0000 (0:00:00.192) 0:01:00.386 ****** 2025-10-09 10:19:33.695469 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695480 | orchestrator | 2025-10-09 10:19:33.695491 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695502 | orchestrator | Thursday 09 October 2025 10:19:26 +0000 (0:00:00.176) 0:01:00.562 ****** 2025-10-09 10:19:33.695513 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695524 | orchestrator | 2025-10-09 10:19:33.695535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695546 | orchestrator | Thursday 09 October 2025 10:19:26 +0000 (0:00:00.200) 0:01:00.763 ****** 2025-10-09 10:19:33.695556 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695567 | orchestrator | 2025-10-09 10:19:33.695578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695589 | orchestrator | Thursday 09 October 2025 10:19:26 +0000 (0:00:00.263) 0:01:01.026 ****** 2025-10-09 10:19:33.695600 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-10-09 10:19:33.695612 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-10-09 10:19:33.695638 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-10-09 
10:19:33.695649 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-10-09 10:19:33.695660 | orchestrator | 2025-10-09 10:19:33.695671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695682 | orchestrator | Thursday 09 October 2025 10:19:27 +0000 (0:00:00.594) 0:01:01.621 ****** 2025-10-09 10:19:33.695693 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695703 | orchestrator | 2025-10-09 10:19:33.695714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695725 | orchestrator | Thursday 09 October 2025 10:19:27 +0000 (0:00:00.190) 0:01:01.812 ****** 2025-10-09 10:19:33.695736 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695747 | orchestrator | 2025-10-09 10:19:33.695758 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695769 | orchestrator | Thursday 09 October 2025 10:19:27 +0000 (0:00:00.186) 0:01:01.998 ****** 2025-10-09 10:19:33.695781 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695791 | orchestrator | 2025-10-09 10:19:33.695802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-10-09 10:19:33.695813 | orchestrator | Thursday 09 October 2025 10:19:27 +0000 (0:00:00.205) 0:01:02.204 ****** 2025-10-09 10:19:33.695824 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695835 | orchestrator | 2025-10-09 10:19:33.695846 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-10-09 10:19:33.695857 | orchestrator | Thursday 09 October 2025 10:19:28 +0000 (0:00:00.197) 0:01:02.401 ****** 2025-10-09 10:19:33.695867 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.695878 | orchestrator | 2025-10-09 10:19:33.695889 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-10-09 10:19:33.695900 | orchestrator | Thursday 09 October 2025 10:19:28 +0000 (0:00:00.292) 0:01:02.694 ****** 2025-10-09 10:19:33.695911 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83d577c9-ff1a-5f1d-bd0e-44f99d742f78'}}) 2025-10-09 10:19:33.695922 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ce20a60-fba3-5536-8b48-1e48c039a9b4'}}) 2025-10-09 10:19:33.695943 | orchestrator | 2025-10-09 10:19:33.695954 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-10-09 10:19:33.695965 | orchestrator | Thursday 09 October 2025 10:19:28 +0000 (0:00:00.195) 0:01:02.889 ****** 2025-10-09 10:19:33.695978 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'}) 2025-10-09 10:19:33.695990 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'}) 2025-10-09 10:19:33.696002 | orchestrator | 2025-10-09 10:19:33.696012 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-10-09 10:19:33.696042 | orchestrator | Thursday 09 October 2025 10:19:30 +0000 (0:00:01.944) 0:01:04.834 ****** 2025-10-09 10:19:33.696054 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:33.696066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:33.696077 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.696088 | orchestrator | 2025-10-09 10:19:33.696099 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-10-09 10:19:33.696110 | orchestrator | Thursday 09 October 2025 10:19:30 +0000 (0:00:00.175) 0:01:05.010 ****** 2025-10-09 10:19:33.696121 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'}) 2025-10-09 10:19:33.696132 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'}) 2025-10-09 10:19:33.696144 | orchestrator | 2025-10-09 10:19:33.696155 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-10-09 10:19:33.696166 | orchestrator | Thursday 09 October 2025 10:19:32 +0000 (0:00:01.349) 0:01:06.359 ****** 2025-10-09 10:19:33.696177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:33.696188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:33.696199 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.696209 | orchestrator | 2025-10-09 10:19:33.696220 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-10-09 10:19:33.696231 | orchestrator | Thursday 09 October 2025 10:19:32 +0000 (0:00:00.171) 0:01:06.531 ****** 2025-10-09 10:19:33.696242 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.696253 | orchestrator | 2025-10-09 10:19:33.696264 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-10-09 10:19:33.696275 | orchestrator | Thursday 09 October 2025 10:19:32 +0000 (0:00:00.137) 0:01:06.668 ****** 2025-10-09 10:19:33.696307 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:33.696324 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:33.696336 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.696347 | orchestrator | 2025-10-09 10:19:33.696358 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-10-09 10:19:33.696368 | orchestrator | Thursday 09 October 2025 10:19:32 +0000 (0:00:00.189) 0:01:06.858 ****** 2025-10-09 10:19:33.696379 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.696397 | orchestrator | 2025-10-09 10:19:33.696408 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-10-09 10:19:33.696419 | orchestrator | Thursday 09 October 2025 10:19:32 +0000 (0:00:00.144) 0:01:07.003 ****** 2025-10-09 10:19:33.696430 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:33.696441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:33.696452 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.696463 | orchestrator | 2025-10-09 10:19:33.696474 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-10-09 10:19:33.696484 | orchestrator | Thursday 09 October 2025 10:19:32 +0000 (0:00:00.165) 0:01:07.168 ****** 2025-10-09 10:19:33.696495 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.696506 | orchestrator | 2025-10-09 10:19:33.696517 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-10-09 10:19:33.696528 | orchestrator | Thursday 09 October 2025 10:19:32 +0000 (0:00:00.156) 0:01:07.324 ****** 2025-10-09 10:19:33.696538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:33.696549 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:33.696560 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:33.696571 | orchestrator | 2025-10-09 10:19:33.696582 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-10-09 10:19:33.696593 | orchestrator | Thursday 09 October 2025 10:19:33 +0000 (0:00:00.163) 0:01:07.488 ****** 2025-10-09 10:19:33.696604 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:19:33.696615 | orchestrator | 2025-10-09 10:19:33.696626 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-10-09 10:19:33.696637 | orchestrator | Thursday 09 October 2025 10:19:33 +0000 (0:00:00.382) 0:01:07.870 ****** 2025-10-09 10:19:33.696655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:40.246934 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:40.247040 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.247055 | orchestrator | 2025-10-09 10:19:40.247067 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-10-09 10:19:40.247081 | orchestrator | Thursday 09 October 2025 
10:19:33 +0000 (0:00:00.166) 0:01:08.037 ****** 2025-10-09 10:19:40.247092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:40.247103 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:40.247114 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.247125 | orchestrator | 2025-10-09 10:19:40.247137 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-10-09 10:19:40.247148 | orchestrator | Thursday 09 October 2025 10:19:33 +0000 (0:00:00.189) 0:01:08.226 ****** 2025-10-09 10:19:40.247159 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:40.247170 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:40.247181 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.247216 | orchestrator | 2025-10-09 10:19:40.247228 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-10-09 10:19:40.247239 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:00.177) 0:01:08.404 ****** 2025-10-09 10:19:40.247250 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.247261 | orchestrator | 2025-10-09 10:19:40.247272 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-10-09 10:19:40.247283 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:00.150) 0:01:08.554 ****** 2025-10-09 10:19:40.247358 | orchestrator | skipping: [testbed-node-5] 2025-10-09 
10:19:40.247369 | orchestrator | 2025-10-09 10:19:40.247380 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-10-09 10:19:40.247391 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:00.157) 0:01:08.712 ****** 2025-10-09 10:19:40.247401 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.247412 | orchestrator | 2025-10-09 10:19:40.247423 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-10-09 10:19:40.247435 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:00.136) 0:01:08.848 ****** 2025-10-09 10:19:40.247446 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:19:40.247457 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-10-09 10:19:40.247470 | orchestrator | } 2025-10-09 10:19:40.247483 | orchestrator | 2025-10-09 10:19:40.247495 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-10-09 10:19:40.247508 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:00.146) 0:01:08.995 ****** 2025-10-09 10:19:40.247520 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:19:40.247532 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-10-09 10:19:40.247545 | orchestrator | } 2025-10-09 10:19:40.247557 | orchestrator | 2025-10-09 10:19:40.247569 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-10-09 10:19:40.247583 | orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:00.157) 0:01:09.152 ****** 2025-10-09 10:19:40.247596 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:19:40.247608 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-10-09 10:19:40.247621 | orchestrator | } 2025-10-09 10:19:40.247633 | orchestrator | 2025-10-09 10:19:40.247645 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-10-09 10:19:40.247658 | 
orchestrator | Thursday 09 October 2025 10:19:34 +0000 (0:00:00.156) 0:01:09.309 ****** 2025-10-09 10:19:40.247671 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:19:40.247683 | orchestrator | 2025-10-09 10:19:40.247695 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-10-09 10:19:40.247708 | orchestrator | Thursday 09 October 2025 10:19:35 +0000 (0:00:00.565) 0:01:09.875 ****** 2025-10-09 10:19:40.247720 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:19:40.247732 | orchestrator | 2025-10-09 10:19:40.247744 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-10-09 10:19:40.247757 | orchestrator | Thursday 09 October 2025 10:19:36 +0000 (0:00:00.548) 0:01:10.423 ****** 2025-10-09 10:19:40.247768 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:19:40.247781 | orchestrator | 2025-10-09 10:19:40.247793 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-10-09 10:19:40.247805 | orchestrator | Thursday 09 October 2025 10:19:36 +0000 (0:00:00.748) 0:01:11.172 ****** 2025-10-09 10:19:40.247818 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:19:40.247829 | orchestrator | 2025-10-09 10:19:40.247840 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-10-09 10:19:40.247851 | orchestrator | Thursday 09 October 2025 10:19:36 +0000 (0:00:00.153) 0:01:11.325 ****** 2025-10-09 10:19:40.247862 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.247873 | orchestrator | 2025-10-09 10:19:40.247884 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-10-09 10:19:40.247894 | orchestrator | Thursday 09 October 2025 10:19:37 +0000 (0:00:00.143) 0:01:11.468 ****** 2025-10-09 10:19:40.247915 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.247926 | orchestrator | 2025-10-09 10:19:40.247937 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-10-09 10:19:40.247948 | orchestrator | Thursday 09 October 2025 10:19:37 +0000 (0:00:00.122) 0:01:11.591 ****** 2025-10-09 10:19:40.247959 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:19:40.247988 | orchestrator |  "vgs_report": { 2025-10-09 10:19:40.248000 | orchestrator |  "vg": [] 2025-10-09 10:19:40.248028 | orchestrator |  } 2025-10-09 10:19:40.248040 | orchestrator | } 2025-10-09 10:19:40.248051 | orchestrator | 2025-10-09 10:19:40.248062 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-10-09 10:19:40.248073 | orchestrator | Thursday 09 October 2025 10:19:37 +0000 (0:00:00.144) 0:01:11.735 ****** 2025-10-09 10:19:40.248084 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248095 | orchestrator | 2025-10-09 10:19:40.248106 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-10-09 10:19:40.248117 | orchestrator | Thursday 09 October 2025 10:19:37 +0000 (0:00:00.135) 0:01:11.871 ****** 2025-10-09 10:19:40.248128 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248139 | orchestrator | 2025-10-09 10:19:40.248150 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-10-09 10:19:40.248161 | orchestrator | Thursday 09 October 2025 10:19:37 +0000 (0:00:00.143) 0:01:12.014 ****** 2025-10-09 10:19:40.248171 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248182 | orchestrator | 2025-10-09 10:19:40.248193 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-10-09 10:19:40.248204 | orchestrator | Thursday 09 October 2025 10:19:37 +0000 (0:00:00.142) 0:01:12.157 ****** 2025-10-09 10:19:40.248215 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248226 | orchestrator | 2025-10-09 10:19:40.248236 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-10-09 10:19:40.248247 | orchestrator | Thursday 09 October 2025 10:19:37 +0000 (0:00:00.154) 0:01:12.311 ****** 2025-10-09 10:19:40.248258 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248268 | orchestrator | 2025-10-09 10:19:40.248279 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-10-09 10:19:40.248309 | orchestrator | Thursday 09 October 2025 10:19:38 +0000 (0:00:00.146) 0:01:12.458 ****** 2025-10-09 10:19:40.248320 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248331 | orchestrator | 2025-10-09 10:19:40.248342 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-10-09 10:19:40.248353 | orchestrator | Thursday 09 October 2025 10:19:38 +0000 (0:00:00.155) 0:01:12.614 ****** 2025-10-09 10:19:40.248363 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248374 | orchestrator | 2025-10-09 10:19:40.248385 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-10-09 10:19:40.248396 | orchestrator | Thursday 09 October 2025 10:19:38 +0000 (0:00:00.159) 0:01:12.773 ****** 2025-10-09 10:19:40.248407 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248417 | orchestrator | 2025-10-09 10:19:40.248428 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-10-09 10:19:40.248439 | orchestrator | Thursday 09 October 2025 10:19:38 +0000 (0:00:00.373) 0:01:13.147 ****** 2025-10-09 10:19:40.248449 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248460 | orchestrator | 2025-10-09 10:19:40.248471 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-10-09 10:19:40.248487 | orchestrator | Thursday 09 October 2025 10:19:38 +0000 (0:00:00.148) 0:01:13.295 ****** 
2025-10-09 10:19:40.248498 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248509 | orchestrator | 2025-10-09 10:19:40.248520 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-10-09 10:19:40.248531 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:00.164) 0:01:13.460 ****** 2025-10-09 10:19:40.248541 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248559 | orchestrator | 2025-10-09 10:19:40.248570 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-10-09 10:19:40.248581 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:00.133) 0:01:13.593 ****** 2025-10-09 10:19:40.248592 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248603 | orchestrator | 2025-10-09 10:19:40.248613 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-10-09 10:19:40.248624 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:00.183) 0:01:13.777 ****** 2025-10-09 10:19:40.248635 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248646 | orchestrator | 2025-10-09 10:19:40.248657 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-10-09 10:19:40.248668 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:00.147) 0:01:13.924 ****** 2025-10-09 10:19:40.248678 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248689 | orchestrator | 2025-10-09 10:19:40.248700 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-10-09 10:19:40.248711 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:00.157) 0:01:14.082 ****** 2025-10-09 10:19:40.248722 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 
10:19:40.248733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:40.248744 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248754 | orchestrator | 2025-10-09 10:19:40.248765 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-10-09 10:19:40.248776 | orchestrator | Thursday 09 October 2025 10:19:39 +0000 (0:00:00.171) 0:01:14.253 ****** 2025-10-09 10:19:40.248787 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:40.248798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:40.248809 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:40.248819 | orchestrator | 2025-10-09 10:19:40.248830 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-10-09 10:19:40.248841 | orchestrator | Thursday 09 October 2025 10:19:40 +0000 (0:00:00.176) 0:01:14.430 ****** 2025-10-09 10:19:40.248859 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:43.425767 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:43.425863 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:43.425879 | orchestrator | 2025-10-09 10:19:43.425892 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-10-09 10:19:43.425905 | orchestrator | Thursday 09 October 2025 
10:19:40 +0000 (0:00:00.161) 0:01:14.591 ****** 2025-10-09 10:19:43.425917 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:43.425928 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:43.425939 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:43.425950 | orchestrator | 2025-10-09 10:19:43.425962 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-10-09 10:19:43.425973 | orchestrator | Thursday 09 October 2025 10:19:40 +0000 (0:00:00.159) 0:01:14.751 ****** 2025-10-09 10:19:43.425984 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:43.426066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:43.426079 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:43.426091 | orchestrator | 2025-10-09 10:19:43.426102 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-10-09 10:19:43.426113 | orchestrator | Thursday 09 October 2025 10:19:40 +0000 (0:00:00.173) 0:01:14.924 ****** 2025-10-09 10:19:43.426124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:43.426135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:43.426147 | orchestrator | skipping: 
[testbed-node-5] 2025-10-09 10:19:43.426158 | orchestrator | 2025-10-09 10:19:43.426184 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-10-09 10:19:43.426195 | orchestrator | Thursday 09 October 2025 10:19:40 +0000 (0:00:00.390) 0:01:15.315 ****** 2025-10-09 10:19:43.426207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:43.426218 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:43.426229 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:43.426240 | orchestrator | 2025-10-09 10:19:43.426251 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-10-09 10:19:43.426263 | orchestrator | Thursday 09 October 2025 10:19:41 +0000 (0:00:00.186) 0:01:15.501 ****** 2025-10-09 10:19:43.426274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:43.426312 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:43.426326 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:43.426338 | orchestrator | 2025-10-09 10:19:43.426350 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-10-09 10:19:43.426363 | orchestrator | Thursday 09 October 2025 10:19:41 +0000 (0:00:00.170) 0:01:15.672 ****** 2025-10-09 10:19:43.426375 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:19:43.426388 | orchestrator | 2025-10-09 10:19:43.426400 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-10-09 10:19:43.426412 | orchestrator | Thursday 09 October 2025 10:19:41 +0000 (0:00:00.528) 0:01:16.201 ****** 2025-10-09 10:19:43.426424 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:19:43.426436 | orchestrator | 2025-10-09 10:19:43.426448 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-10-09 10:19:43.426460 | orchestrator | Thursday 09 October 2025 10:19:42 +0000 (0:00:00.532) 0:01:16.733 ****** 2025-10-09 10:19:43.426472 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:19:43.426484 | orchestrator | 2025-10-09 10:19:43.426497 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-10-09 10:19:43.426509 | orchestrator | Thursday 09 October 2025 10:19:42 +0000 (0:00:00.160) 0:01:16.894 ****** 2025-10-09 10:19:43.426522 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'vg_name': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'}) 2025-10-09 10:19:43.426536 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'vg_name': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'}) 2025-10-09 10:19:43.426548 | orchestrator | 2025-10-09 10:19:43.426560 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-10-09 10:19:43.426582 | orchestrator | Thursday 09 October 2025 10:19:42 +0000 (0:00:00.174) 0:01:17.068 ****** 2025-10-09 10:19:43.426611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:43.426624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:43.426636 | orchestrator | skipping: 
[testbed-node-5] 2025-10-09 10:19:43.426650 | orchestrator | 2025-10-09 10:19:43.426662 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-10-09 10:19:43.426673 | orchestrator | Thursday 09 October 2025 10:19:42 +0000 (0:00:00.147) 0:01:17.215 ****** 2025-10-09 10:19:43.426684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:43.426696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:43.426707 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:43.426719 | orchestrator | 2025-10-09 10:19:43.426730 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-10-09 10:19:43.426741 | orchestrator | Thursday 09 October 2025 10:19:43 +0000 (0:00:00.184) 0:01:17.400 ****** 2025-10-09 10:19:43.426752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})  2025-10-09 10:19:43.426763 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})  2025-10-09 10:19:43.426774 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:19:43.426785 | orchestrator | 2025-10-09 10:19:43.426796 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-10-09 10:19:43.426807 | orchestrator | Thursday 09 October 2025 10:19:43 +0000 (0:00:00.185) 0:01:17.585 ****** 2025-10-09 10:19:43.426818 | orchestrator | ok: [testbed-node-5] => { 2025-10-09 10:19:43.426829 | orchestrator |  "lvm_report": { 2025-10-09 10:19:43.426840 | orchestrator |  "lv": [ 2025-10-09 
10:19:43.426851 | orchestrator |  { 2025-10-09 10:19:43.426862 | orchestrator |  "lv_name": "osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78", 2025-10-09 10:19:43.426879 | orchestrator |  "vg_name": "ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78" 2025-10-09 10:19:43.426891 | orchestrator |  }, 2025-10-09 10:19:43.426901 | orchestrator |  { 2025-10-09 10:19:43.426913 | orchestrator |  "lv_name": "osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4", 2025-10-09 10:19:43.426924 | orchestrator |  "vg_name": "ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4" 2025-10-09 10:19:43.426935 | orchestrator |  } 2025-10-09 10:19:43.426946 | orchestrator |  ], 2025-10-09 10:19:43.426957 | orchestrator |  "pv": [ 2025-10-09 10:19:43.426968 | orchestrator |  { 2025-10-09 10:19:43.426979 | orchestrator |  "pv_name": "/dev/sdb", 2025-10-09 10:19:43.426990 | orchestrator |  "vg_name": "ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78" 2025-10-09 10:19:43.427001 | orchestrator |  }, 2025-10-09 10:19:43.427012 | orchestrator |  { 2025-10-09 10:19:43.427023 | orchestrator |  "pv_name": "/dev/sdc", 2025-10-09 10:19:43.427034 | orchestrator |  "vg_name": "ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4" 2025-10-09 10:19:43.427045 | orchestrator |  } 2025-10-09 10:19:43.427056 | orchestrator |  ] 2025-10-09 10:19:43.427067 | orchestrator |  } 2025-10-09 10:19:43.427078 | orchestrator | } 2025-10-09 10:19:43.427089 | orchestrator | 2025-10-09 10:19:43.427100 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:19:43.427118 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-10-09 10:19:43.427130 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-10-09 10:19:43.427141 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-10-09 10:19:43.427152 | orchestrator | 2025-10-09 10:19:43.427163 | 
orchestrator | 2025-10-09 10:19:43.427174 | orchestrator | 2025-10-09 10:19:43.427185 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:19:43.427196 | orchestrator | Thursday 09 October 2025 10:19:43 +0000 (0:00:00.159) 0:01:17.745 ****** 2025-10-09 10:19:43.427207 | orchestrator | =============================================================================== 2025-10-09 10:19:43.427218 | orchestrator | Create block VGs -------------------------------------------------------- 5.82s 2025-10-09 10:19:43.427229 | orchestrator | Create block LVs -------------------------------------------------------- 4.15s 2025-10-09 10:19:43.427240 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s 2025-10-09 10:19:43.427251 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.84s 2025-10-09 10:19:43.427262 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s 2025-10-09 10:19:43.427273 | orchestrator | Add known partitions to the list of available block devices ------------- 1.62s 2025-10-09 10:19:43.427299 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2025-10-09 10:19:43.427311 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s 2025-10-09 10:19:43.427329 | orchestrator | Add known links to the list of available block devices ------------------ 1.50s 2025-10-09 10:19:43.876189 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s 2025-10-09 10:19:43.876274 | orchestrator | Print LVM report data --------------------------------------------------- 1.00s 2025-10-09 10:19:43.876319 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s 2025-10-09 10:19:43.876332 | orchestrator | Add known links to the list of available 
block devices ------------------ 0.86s 2025-10-09 10:19:43.876343 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.83s 2025-10-09 10:19:43.876354 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-10-09 10:19:43.876365 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.78s 2025-10-09 10:19:43.876376 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.74s 2025-10-09 10:19:43.876387 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-10-09 10:19:43.876397 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.73s 2025-10-09 10:19:43.876408 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.73s 2025-10-09 10:19:56.339024 | orchestrator | 2025-10-09 10:19:56 | INFO  | Task 258a7050-69d5-4828-8c65-7f9bec495ca4 (facts) was prepared for execution. 2025-10-09 10:19:56.339132 | orchestrator | 2025-10-09 10:19:56 | INFO  | It takes a moment until task 258a7050-69d5-4828-8c65-7f9bec495ca4 (facts) has been started and output is visible here. 
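The play above ("Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output", "Create list of VG/LV names") appears to parse the JSON reports that `lvs --reportformat json` and `pvs --reportformat json` emit and merge them into the `lvm_report` structure printed in the log. A minimal sketch of that combine step, assuming that report shape (the `combine_reports` helper name is hypothetical; the LV/PV values are taken from the log output above):

```python
import json

# Simulated command output in the shape produced by
# `lvs --reportformat json -o lv_name,vg_name` and
# `pvs --reportformat json -o pv_name,vg_name`.
lvs_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78",
     "vg_name": "ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78"},
    {"lv_name": "osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4",
     "vg_name": "ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4"},
]}]})
pvs_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78"},
    {"pv_name": "/dev/sdc",
     "vg_name": "ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4"},
]}]})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lvs and pvs JSON reports into one lvm_report dict
    matching the structure printed by the 'Print LVM report data' task."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

lvm_report = combine_reports(lvs_output, pvs_output)

# "Create list of VG/LV names": build vg/lv pairs, which the later
# "Fail if ... LV defined in lvm_volumes is missing" checks test against.
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in lvm_report["lv"]]
```

With two OSD block LVs present and no DB/WAL devices configured, every DB/WAL check in the log is skipped and only the block LV existence checks have data to compare against, consistent with the PLAY RECAP above.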
2025-10-09 10:20:09.997201 | orchestrator | 2025-10-09 10:20:09.997355 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-10-09 10:20:09.997373 | orchestrator | 2025-10-09 10:20:09.997385 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-10-09 10:20:09.997397 | orchestrator | Thursday 09 October 2025 10:20:01 +0000 (0:00:00.306) 0:00:00.306 ****** 2025-10-09 10:20:09.997409 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:09.997421 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:20:09.997459 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:20:09.997471 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:20:09.997482 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:20:09.997493 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:20:09.997503 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:20:09.997514 | orchestrator | 2025-10-09 10:20:09.997525 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-10-09 10:20:09.997536 | orchestrator | Thursday 09 October 2025 10:20:02 +0000 (0:00:01.175) 0:00:01.481 ****** 2025-10-09 10:20:09.997547 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:20:09.997559 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:20:09.997570 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:20:09.997581 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:20:09.997592 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:20:09.997603 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:20:09.997613 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:20:09.997624 | orchestrator | 2025-10-09 10:20:09.997635 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-10-09 10:20:09.997646 | orchestrator | 2025-10-09 10:20:09.997656 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-10-09 10:20:09.997667 | orchestrator | Thursday 09 October 2025 10:20:03 +0000 (0:00:01.375) 0:00:02.857 ****** 2025-10-09 10:20:09.997678 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:20:09.997689 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:20:09.997700 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:20:09.997711 | orchestrator | ok: [testbed-manager] 2025-10-09 10:20:09.997721 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:20:09.997732 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:20:09.997743 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:20:09.997753 | orchestrator | 2025-10-09 10:20:09.997764 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-10-09 10:20:09.997775 | orchestrator | 2025-10-09 10:20:09.997786 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-10-09 10:20:09.997797 | orchestrator | Thursday 09 October 2025 10:20:08 +0000 (0:00:05.248) 0:00:08.105 ****** 2025-10-09 10:20:09.997808 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:20:09.997819 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:20:09.997829 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:20:09.997840 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:20:09.997851 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:20:09.997862 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:20:09.997872 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:20:09.997883 | orchestrator | 2025-10-09 10:20:09.997894 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:20:09.997905 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:20:09.997917 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-10-09 10:20:09.997928 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:20:09.997939 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:20:09.997950 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:20:09.997961 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:20:09.997972 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:20:09.997990 | orchestrator | 2025-10-09 10:20:09.998001 | orchestrator | 2025-10-09 10:20:09.998012 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:20:09.998076 | orchestrator | Thursday 09 October 2025 10:20:09 +0000 (0:00:00.666) 0:00:08.772 ****** 2025-10-09 10:20:09.998087 | orchestrator | =============================================================================== 2025-10-09 10:20:09.998098 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.25s 2025-10-09 10:20:09.998109 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s 2025-10-09 10:20:09.998120 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s 2025-10-09 10:20:09.998131 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.67s 2025-10-09 10:20:22.536054 | orchestrator | 2025-10-09 10:20:22 | INFO  | Task 156bcb66-8515-4158-8f56-ec0edf06f909 (frr) was prepared for execution. 2025-10-09 10:20:22.536152 | orchestrator | 2025-10-09 10:20:22 | INFO  | It takes a moment until task 156bcb66-8515-4158-8f56-ec0edf06f909 (frr) has been started and output is visible here. 
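The facts play above ("Create custom facts directory", "Copy fact files") relies on Ansible's local-facts mechanism: JSON (or INI, or executable) `*.fact` files placed in `/etc/ansible/facts.d` are picked up on the next fact-gathering run and exposed under `ansible_local.<basename>`. A self-contained sketch of that parsing behavior (the directory and payload here are stand-ins using a temp dir, not the role's actual fact files):

```python
import json
import os
import tempfile

# Stand-in for /etc/ansible/facts.d so the sketch runs anywhere.
facts_d = tempfile.mkdtemp()

# Drop a JSON fact file, as the 'Copy fact files' task would.
# The file name and payload are illustrative, not from the role.
with open(os.path.join(facts_d, "testbed.fact"), "w") as f:
    json.dump({"role": "storage"}, f)

def load_local_facts(directory: str) -> dict:
    """Mimic the setup module's handling of JSON *.fact files:
    each file becomes a key under ansible_local named after its basename."""
    facts = {}
    for name in sorted(os.listdir(directory)):
        if name.endswith(".fact"):
            with open(os.path.join(directory, name)) as f:
                facts[name[: -len(".fact")]] = json.load(f)
    return facts

ansible_local = load_local_facts(facts_d)
```

In the run above the 'Copy fact files' task is skipped on every host, so fact gathering proceeds with whatever custom facts already exist on the nodes.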
2025-10-09 10:20:50.709113 | orchestrator |
2025-10-09 10:20:50.709217 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-10-09 10:20:50.709231 | orchestrator |
2025-10-09 10:20:50.709242 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-10-09 10:20:50.709252 | orchestrator | Thursday 09 October 2025 10:20:27 +0000 (0:00:00.249) 0:00:00.249 ******
2025-10-09 10:20:50.709324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-10-09 10:20:50.709338 | orchestrator |
2025-10-09 10:20:50.709347 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-10-09 10:20:50.709356 | orchestrator | Thursday 09 October 2025 10:20:27 +0000 (0:00:00.229) 0:00:00.478 ******
2025-10-09 10:20:50.709366 | orchestrator | changed: [testbed-manager]
2025-10-09 10:20:50.709375 | orchestrator |
2025-10-09 10:20:50.709384 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-10-09 10:20:50.709393 | orchestrator | Thursday 09 October 2025 10:20:28 +0000 (0:00:01.242) 0:00:01.721 ******
2025-10-09 10:20:50.709402 | orchestrator | changed: [testbed-manager]
2025-10-09 10:20:50.709411 | orchestrator |
2025-10-09 10:20:50.709426 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-10-09 10:20:50.709435 | orchestrator | Thursday 09 October 2025 10:20:39 +0000 (0:00:10.958) 0:00:12.679 ******
2025-10-09 10:20:50.709444 | orchestrator | ok: [testbed-manager]
2025-10-09 10:20:50.709454 | orchestrator |
2025-10-09 10:20:50.709463 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-10-09 10:20:50.709471 | orchestrator | Thursday 09 October 2025 10:20:40 +0000 (0:00:01.062) 0:00:13.742 ******
2025-10-09 10:20:50.709480 | orchestrator | changed: [testbed-manager]
2025-10-09 10:20:50.709489 | orchestrator |
2025-10-09 10:20:50.709498 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-10-09 10:20:50.709507 | orchestrator | Thursday 09 October 2025 10:20:41 +0000 (0:00:00.986) 0:00:14.729 ******
2025-10-09 10:20:50.709515 | orchestrator | ok: [testbed-manager]
2025-10-09 10:20:50.709524 | orchestrator |
2025-10-09 10:20:50.709533 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-10-09 10:20:50.709542 | orchestrator | Thursday 09 October 2025 10:20:42 +0000 (0:00:01.304) 0:00:16.034 ******
2025-10-09 10:20:50.709551 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:20:50.709560 | orchestrator |
2025-10-09 10:20:50.709569 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-10-09 10:20:50.709578 | orchestrator | Thursday 09 October 2025 10:20:43 +0000 (0:00:00.860) 0:00:16.894 ******
2025-10-09 10:20:50.709587 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:20:50.709595 | orchestrator |
2025-10-09 10:20:50.709605 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-10-09 10:20:50.709633 | orchestrator | Thursday 09 October 2025 10:20:43 +0000 (0:00:00.170) 0:00:17.064 ******
2025-10-09 10:20:50.709642 | orchestrator | changed: [testbed-manager]
2025-10-09 10:20:50.709651 | orchestrator |
2025-10-09 10:20:50.709662 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-10-09 10:20:50.709671 | orchestrator | Thursday 09 October 2025 10:20:44 +0000 (0:00:01.013) 0:00:18.078 ******
2025-10-09 10:20:50.709681 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-10-09 10:20:50.709690 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-10-09 10:20:50.709701 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-10-09 10:20:50.709711 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-10-09 10:20:50.709721 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-10-09 10:20:50.709731 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-10-09 10:20:50.709741 | orchestrator |
2025-10-09 10:20:50.709751 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-10-09 10:20:50.709761 | orchestrator | Thursday 09 October 2025 10:20:47 +0000 (0:00:02.358) 0:00:20.436 ******
2025-10-09 10:20:50.709771 | orchestrator | ok: [testbed-manager]
2025-10-09 10:20:50.709780 | orchestrator |
2025-10-09 10:20:50.709790 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-10-09 10:20:50.709800 | orchestrator | Thursday 09 October 2025 10:20:48 +0000 (0:00:01.703) 0:00:22.140 ******
2025-10-09 10:20:50.709810 | orchestrator | changed: [testbed-manager]
2025-10-09 10:20:50.709820 | orchestrator |
2025-10-09 10:20:50.709829 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:20:50.709839 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-10-09 10:20:50.709849 | orchestrator |
2025-10-09 10:20:50.709859 | orchestrator |
2025-10-09 10:20:50.709869 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:20:50.709878 | orchestrator | Thursday 09 October 2025 10:20:50 +0000 (0:00:01.425) 0:00:23.566 ******
2025-10-09 10:20:50.709888 | orchestrator | ===============================================================================
2025-10-09 10:20:50.709898 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.96s
2025-10-09 10:20:50.709907 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.36s
2025-10-09 10:20:50.709917 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.70s
2025-10-09 10:20:50.709926 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.43s
2025-10-09 10:20:50.709950 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.30s
2025-10-09 10:20:50.709961 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.24s
2025-10-09 10:20:50.709970 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.06s
2025-10-09 10:20:50.709980 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.01s
2025-10-09 10:20:50.709990 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.99s
2025-10-09 10:20:50.710000 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.86s
2025-10-09 10:20:50.710010 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2025-10-09 10:20:50.710062 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.17s
2025-10-09 10:20:51.054233 | orchestrator |
2025-10-09 10:20:51.057636 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Oct 9 10:20:51 UTC 2025
2025-10-09 10:20:51.057703 | orchestrator |
2025-10-09 10:20:53.054135 | orchestrator | 2025-10-09 10:20:53 | INFO  | Collection nutshell is prepared for execution
2025-10-09 10:20:53.054232 | orchestrator | 2025-10-09 10:20:53 | INFO  | D [0] - dotfiles
2025-10-09 10:21:03.171248 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [0] - homer
2025-10-09 10:21:03.171408 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [0] - netdata
2025-10-09 10:21:03.171437 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [0] - openstackclient
2025-10-09 10:21:03.172029 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [0] - phpmyadmin
2025-10-09 10:21:03.172480 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [0] - common
2025-10-09 10:21:03.177990 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [1] -- loadbalancer
2025-10-09 10:21:03.178067 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [2] --- opensearch
2025-10-09 10:21:03.178400 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [2] --- mariadb-ng
2025-10-09 10:21:03.178420 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [3] ---- horizon
2025-10-09 10:21:03.178431 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [3] ---- keystone
2025-10-09 10:21:03.178774 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [4] ----- neutron
2025-10-09 10:21:03.178870 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [5] ------ wait-for-nova
2025-10-09 10:21:03.178886 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [6] ------- octavia
2025-10-09 10:21:03.180978 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [4] ----- barbican
2025-10-09 10:21:03.181007 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [4] ----- designate
2025-10-09 10:21:03.181350 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [4] ----- ironic
2025-10-09 10:21:03.181375 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [4] ----- placement
2025-10-09 10:21:03.181387 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [4] ----- magnum
2025-10-09 10:21:03.181633 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [1] -- openvswitch
2025-10-09 10:21:03.181908 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [2] --- ovn
2025-10-09 10:21:03.182397 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [1] -- memcached
2025-10-09 10:21:03.182426 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [1] -- redis
2025-10-09 10:21:03.182438 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [1] -- rabbitmq-ng
2025-10-09 10:21:03.182989 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [0] - kubernetes
2025-10-09 10:21:03.186130 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [1] -- kubeconfig
2025-10-09 10:21:03.186156 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [1] -- copy-kubeconfig
2025-10-09 10:21:03.186423 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [0] - ceph
2025-10-09 10:21:03.188961 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [1] -- ceph-pools
2025-10-09 10:21:03.188984 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [2] --- copy-ceph-keys
2025-10-09 10:21:03.189534 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [3] ---- cephclient
2025-10-09 10:21:03.189554 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-10-09 10:21:03.189565 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [4] ----- wait-for-keystone
2025-10-09 10:21:03.189576 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [5] ------ kolla-ceph-rgw
2025-10-09 10:21:03.189672 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [5] ------ glance
2025-10-09 10:21:03.190350 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [5] ------ cinder
2025-10-09 10:21:03.190372 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [5] ------ nova
2025-10-09 10:21:03.190755 | orchestrator | 2025-10-09 10:21:03 | INFO  | A [4] ----- prometheus
2025-10-09 10:21:03.191031 | orchestrator | 2025-10-09 10:21:03 | INFO  | D [5] ------ grafana
2025-10-09 10:21:03.409369 | orchestrator | 2025-10-09 10:21:03 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-10-09 10:21:03.409458 | orchestrator | 2025-10-09 10:21:03 | INFO  | Tasks are running in the background
2025-10-09 10:21:06.816466 | orchestrator | 2025-10-09 10:21:06 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-10-09 10:21:08.989803 | orchestrator | 2025-10-09 10:21:08 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:08.990211 | orchestrator | 2025-10-09 10:21:08 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:08.991173 | orchestrator | 2025-10-09 10:21:08 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:08.991955 | orchestrator | 2025-10-09 10:21:08 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:08.992744 | orchestrator | 2025-10-09 10:21:08 | INFO  | Task bef715b4-2999-44a5-8fd4-1742c56d1833 is in state STARTED
2025-10-09 10:21:08.993554 | orchestrator | 2025-10-09 10:21:08 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED
2025-10-09 10:21:08.994312 | orchestrator | 2025-10-09 10:21:08 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:21:08.994340 | orchestrator | 2025-10-09 10:21:08 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:21:12.033975 | orchestrator | 2025-10-09 10:21:12 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:12.034117 | orchestrator | 2025-10-09 10:21:12 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:12.034132 | orchestrator | 2025-10-09 10:21:12 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:12.034144 | orchestrator | 2025-10-09 10:21:12 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:12.038074 | orchestrator | 2025-10-09 10:21:12 | INFO  | Task bef715b4-2999-44a5-8fd4-1742c56d1833 is in state STARTED
2025-10-09 10:21:12.040243 | orchestrator | 2025-10-09 10:21:12 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED
2025-10-09 10:21:12.040847 | orchestrator | 2025-10-09 10:21:12 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:21:12.040869 | orchestrator | 2025-10-09 10:21:12 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:21:15.090557 | orchestrator | 2025-10-09 10:21:15 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:15.090660 | orchestrator | 2025-10-09 10:21:15 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:15.090924 | orchestrator | 2025-10-09 10:21:15 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:15.091417 | orchestrator | 2025-10-09 10:21:15 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:15.093033 | orchestrator | 2025-10-09 10:21:15 | INFO  | Task bef715b4-2999-44a5-8fd4-1742c56d1833 is in state STARTED
2025-10-09 10:21:15.093442 | orchestrator | 2025-10-09 10:21:15 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED
2025-10-09 10:21:15.095482 | orchestrator | 2025-10-09 10:21:15 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:21:15.095507 | orchestrator | 2025-10-09 10:21:15 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:21:18.154879 | orchestrator | 2025-10-09 10:21:18 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:18.160428 | orchestrator | 2025-10-09 10:21:18 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:18.162551 | orchestrator | 2025-10-09 10:21:18 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:18.168931 | orchestrator | 2025-10-09 10:21:18 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:18.172935 | orchestrator | 2025-10-09 10:21:18 | INFO  | Task bef715b4-2999-44a5-8fd4-1742c56d1833 is in state STARTED
2025-10-09 10:21:18.176434 | orchestrator | 2025-10-09 10:21:18 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED
2025-10-09 10:21:18.179356 | orchestrator | 2025-10-09 10:21:18 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:21:18.179379 | orchestrator | 2025-10-09 10:21:18 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:21:21.252853 | orchestrator | 2025-10-09 10:21:21 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:21.253061 | orchestrator | 2025-10-09 10:21:21 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:21.253612 | orchestrator | 2025-10-09 10:21:21 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:21.256224 | orchestrator | 2025-10-09 10:21:21 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:21.256789 | orchestrator | 2025-10-09 10:21:21 | INFO  | Task bef715b4-2999-44a5-8fd4-1742c56d1833 is in state STARTED
2025-10-09 10:21:21.257408 | orchestrator | 2025-10-09 10:21:21 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED
2025-10-09 10:21:21.258210 | orchestrator | 2025-10-09 10:21:21 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:21:21.258250 | orchestrator | 2025-10-09 10:21:21 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:21:24.403483 | orchestrator | 2025-10-09 10:21:24 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:24.403605 | orchestrator | 2025-10-09 10:21:24 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:24.403622 | orchestrator | 2025-10-09 10:21:24 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:24.403635 | orchestrator | 2025-10-09 10:21:24 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:24.403646 | orchestrator | 2025-10-09 10:21:24 | INFO  | Task bef715b4-2999-44a5-8fd4-1742c56d1833 is in state STARTED
2025-10-09 10:21:24.403657 | orchestrator | 2025-10-09 10:21:24 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED
2025-10-09 10:21:24.403669 | orchestrator | 2025-10-09 10:21:24 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:21:24.403680 | orchestrator | 2025-10-09 10:21:24 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:21:27.725928 | orchestrator | 2025-10-09 10:21:27 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:27.728574 | orchestrator | 2025-10-09 10:21:27 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:27.729028 | orchestrator | 2025-10-09 10:21:27 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:27.733912 | orchestrator | 2025-10-09 10:21:27 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:27.734927 | orchestrator | 2025-10-09 10:21:27 | INFO  | Task bef715b4-2999-44a5-8fd4-1742c56d1833 is in state STARTED
2025-10-09 10:21:27.737503 | orchestrator | 2025-10-09 10:21:27 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED
2025-10-09 10:21:27.741386 | orchestrator | 2025-10-09 10:21:27 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:21:27.741416 | orchestrator | 2025-10-09 10:21:27 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:21:30.867023 | orchestrator | 2025-10-09 10:21:30 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:30.868444 | orchestrator | 2025-10-09 10:21:30 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:30.869492 | orchestrator | 2025-10-09 10:21:30 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:30.871426 | orchestrator | 2025-10-09 10:21:30 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:30.872909 | orchestrator | 2025-10-09 10:21:30 | INFO  | Task bef715b4-2999-44a5-8fd4-1742c56d1833 is in state STARTED
2025-10-09 10:21:30.875598 | orchestrator | 2025-10-09 10:21:30 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED
2025-10-09 10:21:30.878077 | orchestrator | 2025-10-09 10:21:30 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:21:30.878107 | orchestrator | 2025-10-09 10:21:30 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:21:34.109520 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:34.113580 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:34.127057 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:34.132291 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:34.136574 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task bef715b4-2999-44a5-8fd4-1742c56d1833 is in state SUCCESS
2025-10-09 10:21:34.137241 | orchestrator |
2025-10-09 10:21:34.137319 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-10-09 10:21:34.137333 | orchestrator |
2025-10-09 10:21:34.137345 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-10-09 10:21:34.137356 | orchestrator | Thursday 09 October 2025 10:21:18 +0000 (0:00:00.764) 0:00:00.764 ******
2025-10-09 10:21:34.137367 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:21:34.137379 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:21:34.137390 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:21:34.137401 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:21:34.137411 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:21:34.137422 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:21:34.137433 | orchestrator | changed: [testbed-manager]
2025-10-09 10:21:34.137443 | orchestrator |
2025-10-09 10:21:34.137455 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-10-09 10:21:34.137467 | orchestrator | Thursday 09 October 2025 10:21:22 +0000 (0:00:03.787) 0:00:04.551 ******
2025-10-09 10:21:34.137478 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-10-09 10:21:34.137489 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-10-09 10:21:34.137500 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-10-09 10:21:34.137510 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-10-09 10:21:34.137521 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-10-09 10:21:34.137557 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-10-09 10:21:34.137568 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-10-09 10:21:34.137579 | orchestrator |
2025-10-09 10:21:34.137590 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2025-10-09 10:21:34.137601 | orchestrator | Thursday 09 October 2025 10:21:24 +0000 (0:00:01.648) 0:00:06.200 ****** 2025-10-09 10:21:34.137623 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:21:23.127657', 'end': '2025-10-09 10:21:23.133071', 'delta': '0:00:00.005414', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:21:34.137639 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:21:23.425444', 'end': '2025-10-09 10:21:23.435325', 'delta': '0:00:00.009881', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:21:34.137651 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:21:23.331928', 'end': '2025-10-09 10:21:23.338261', 'delta': '0:00:00.006333', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:21:34.137969 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:21:24.065725', 'end': '2025-10-09 10:21:24.074435', 'delta': '0:00:00.008710', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:21:34.137985 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:21:23.143864', 'end': '2025-10-09 10:21:23.161404', 'delta': '0:00:00.017540', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:21:34.138054 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:21:23.664720', 'end': '2025-10-09 10:21:23.674079', 'delta': '0:00:00.009359', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:21:34.138069 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-10-09 10:21:23.918830', 'end': '2025-10-09 10:21:23.927136', 'delta': '0:00:00.008306', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-10-09 10:21:34.138080 | orchestrator | 2025-10-09 10:21:34.138091 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-10-09 10:21:34.138103 | orchestrator | Thursday 09 October 2025 10:21:26 +0000 (0:00:02.010) 0:00:08.210 ****** 2025-10-09 10:21:34.138113 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-10-09 10:21:34.138125 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-10-09 10:21:34.138135 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-10-09 10:21:34.138146 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-10-09 10:21:34.138157 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-10-09 10:21:34.138168 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-10-09 10:21:34.138178 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-10-09 10:21:34.138189 | orchestrator | 2025-10-09 10:21:34.138200 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
******************
2025-10-09 10:21:34.138210 | orchestrator | Thursday 09 October 2025 10:21:28 +0000 (0:00:02.534) 0:00:10.745 ******
2025-10-09 10:21:34.138221 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-10-09 10:21:34.138232 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-10-09 10:21:34.138243 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-10-09 10:21:34.138253 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-10-09 10:21:34.138292 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-10-09 10:21:34.138305 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-10-09 10:21:34.138316 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-10-09 10:21:34.138327 | orchestrator |
2025-10-09 10:21:34.138338 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:21:34.138366 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:21:34.138381 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:21:34.138392 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:21:34.138403 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:21:34.138414 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:21:34.138425 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:21:34.138436 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:21:34.138447 | orchestrator |
2025-10-09 10:21:34.138458 | orchestrator |
2025-10-09 10:21:34.138469 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:21:34.138480 | orchestrator | Thursday 09 October 2025 10:21:33 +0000 (0:00:04.137) 0:00:14.882 ******
2025-10-09 10:21:34.138491 | orchestrator | ===============================================================================
2025-10-09 10:21:34.138502 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.14s
2025-10-09 10:21:34.138513 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.79s
2025-10-09 10:21:34.138524 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.53s
2025-10-09 10:21:34.138535 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.01s
2025-10-09 10:21:34.138546 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.65s
2025-10-09 10:21:34.144663 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED
2025-10-09 10:21:34.153957 | orchestrator | 2025-10-09 10:21:34 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:21:34.158927 | orchestrator | 2025-10-09 10:21:34 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:21:37.247722 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:21:37.248769 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:21:37.251331 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED
2025-10-09 10:21:37.252581 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED
2025-10-09 10:21:37.254344 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is
in state STARTED 2025-10-09 10:21:37.256488 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:21:37.257625 | orchestrator | 2025-10-09 10:21:37 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:21:37.258069 | orchestrator | 2025-10-09 10:21:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:40.501130 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:21:40.501317 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:21:40.501373 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:21:40.501386 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED 2025-10-09 10:21:40.501397 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:21:40.501408 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:21:40.501420 | orchestrator | 2025-10-09 10:21:40 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:21:40.501432 | orchestrator | 2025-10-09 10:21:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:43.659394 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:21:43.909341 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:21:43.909418 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:21:43.909431 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in 
state STARTED 2025-10-09 10:21:43.909443 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:21:43.909454 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:21:43.909465 | orchestrator | 2025-10-09 10:21:43 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:21:43.909478 | orchestrator | 2025-10-09 10:21:43 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:46.720914 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:21:46.720990 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:21:46.721643 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:21:46.722101 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED 2025-10-09 10:21:46.723147 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:21:46.724927 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:21:46.726545 | orchestrator | 2025-10-09 10:21:46 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:21:46.726571 | orchestrator | 2025-10-09 10:21:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:49.830715 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:21:49.833951 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:21:49.837461 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state 
STARTED 2025-10-09 10:21:49.837935 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED 2025-10-09 10:21:49.838590 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:21:49.839317 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:21:49.840070 | orchestrator | 2025-10-09 10:21:49 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:21:49.840097 | orchestrator | 2025-10-09 10:21:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:52.954227 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:21:52.954536 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:21:52.955952 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:21:52.956458 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state STARTED 2025-10-09 10:21:52.957742 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:21:52.960084 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:21:52.961625 | orchestrator | 2025-10-09 10:21:52 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:21:52.961647 | orchestrator | 2025-10-09 10:21:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:56.006371 | orchestrator | 2025-10-09 10:21:56 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:21:56.006484 | orchestrator | 2025-10-09 10:21:56 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state
STARTED 2025-10-09 10:21:56.008937 | orchestrator | 2025-10-09 10:21:56 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:21:56.010994 | orchestrator | 2025-10-09 10:21:56 | INFO  | Task c250d004-5466-42b7-955e-e38cee0732f4 is in state SUCCESS 2025-10-09 10:21:56.013575 | orchestrator | 2025-10-09 10:21:56 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:21:56.014865 | orchestrator | 2025-10-09 10:21:56 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:21:56.014888 | orchestrator | 2025-10-09 10:21:56 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:21:56.112159 | orchestrator | 2025-10-09 10:21:56 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:21:59.113470 | orchestrator | 2025-10-09 10:21:59 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:21:59.114663 | orchestrator | 2025-10-09 10:21:59 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:21:59.129865 | orchestrator | 2025-10-09 10:21:59 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:21:59.133181 | orchestrator | 2025-10-09 10:21:59 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:21:59.133777 | orchestrator | 2025-10-09 10:21:59 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:21:59.135036 | orchestrator | 2025-10-09 10:21:59 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:21:59.135053 | orchestrator | 2025-10-09 10:21:59 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:02.208063 | orchestrator | 2025-10-09 10:22:02 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:02.210141 | orchestrator | 2025-10-09 10:22:02 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 
2025-10-09 10:22:02.210168 | orchestrator | 2025-10-09 10:22:02 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:22:02.213142 | orchestrator | 2025-10-09 10:22:02 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:22:02.213163 | orchestrator | 2025-10-09 10:22:02 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:02.213914 | orchestrator | 2025-10-09 10:22:02 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:02.213936 | orchestrator | 2025-10-09 10:22:02 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:05.300543 | orchestrator | 2025-10-09 10:22:05 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:05.301111 | orchestrator | 2025-10-09 10:22:05 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:05.301858 | orchestrator | 2025-10-09 10:22:05 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:22:05.302709 | orchestrator | 2025-10-09 10:22:05 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:22:05.303776 | orchestrator | 2025-10-09 10:22:05 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:05.305103 | orchestrator | 2025-10-09 10:22:05 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:05.305143 | orchestrator | 2025-10-09 10:22:05 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:08.380762 | orchestrator | 2025-10-09 10:22:08 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:08.381729 | orchestrator | 2025-10-09 10:22:08 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:08.383570 | orchestrator | 2025-10-09 10:22:08 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 
2025-10-09 10:22:08.384801 | orchestrator | 2025-10-09 10:22:08 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:22:08.386810 | orchestrator | 2025-10-09 10:22:08 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:08.388421 | orchestrator | 2025-10-09 10:22:08 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:08.388442 | orchestrator | 2025-10-09 10:22:08 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:11.510945 | orchestrator | 2025-10-09 10:22:11 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:11.511065 | orchestrator | 2025-10-09 10:22:11 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:11.511087 | orchestrator | 2025-10-09 10:22:11 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:22:11.511107 | orchestrator | 2025-10-09 10:22:11 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state STARTED 2025-10-09 10:22:11.511124 | orchestrator | 2025-10-09 10:22:11 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:11.511143 | orchestrator | 2025-10-09 10:22:11 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:11.511161 | orchestrator | 2025-10-09 10:22:11 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:14.541080 | orchestrator | 2025-10-09 10:22:14 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:14.541705 | orchestrator | 2025-10-09 10:22:14 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:14.542808 | orchestrator | 2025-10-09 10:22:14 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:22:14.543423 | orchestrator | 2025-10-09 10:22:14 | INFO  | Task 91880c7d-0975-4a0a-9860-ebb4550d43e6 is in state SUCCESS 
2025-10-09 10:22:14.544214 | orchestrator | 2025-10-09 10:22:14 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:14.545129 | orchestrator | 2025-10-09 10:22:14 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:14.545164 | orchestrator | 2025-10-09 10:22:14 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:17.711679 | orchestrator | 2025-10-09 10:22:17 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:17.711762 | orchestrator | 2025-10-09 10:22:17 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:17.711774 | orchestrator | 2025-10-09 10:22:17 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:22:17.711786 | orchestrator | 2025-10-09 10:22:17 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:17.714425 | orchestrator | 2025-10-09 10:22:17 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:17.714450 | orchestrator | 2025-10-09 10:22:17 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:20.810200 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:20.810419 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:20.810915 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:22:20.811662 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:20.812250 | orchestrator | 2025-10-09 10:22:20 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:20.812297 | orchestrator | 2025-10-09 10:22:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:23.879566 | 
orchestrator | 2025-10-09 10:22:23 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:23.880550 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:23.882566 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:22:23.884852 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:23.886092 | orchestrator | 2025-10-09 10:22:23 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:23.888544 | orchestrator | 2025-10-09 10:22:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:26.945743 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:26.946633 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:26.947979 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:22:26.949893 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:26.950429 | orchestrator | 2025-10-09 10:22:26 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:26.950929 | orchestrator | 2025-10-09 10:22:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:30.005458 | orchestrator | 2025-10-09 10:22:30 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:30.007644 | orchestrator | 2025-10-09 10:22:30 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:30.008822 | orchestrator | 2025-10-09 10:22:30 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state STARTED 2025-10-09 10:22:30.010232 | 
orchestrator | 2025-10-09 10:22:30 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:30.011618 | orchestrator | 2025-10-09 10:22:30 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED 2025-10-09 10:22:30.011649 | orchestrator | 2025-10-09 10:22:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:22:33.069358 | orchestrator | 2025-10-09 10:22:33 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:22:33.070758 | orchestrator | 2025-10-09 10:22:33 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED 2025-10-09 10:22:33.075683 | orchestrator | 2025-10-09 10:22:33 | INFO  | Task c3da5746-511c-4c08-9239-4b811c6f9f1d is in state SUCCESS 2025-10-09 10:22:33.075709 | orchestrator | 2025-10-09 10:22:33 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:22:33.077314 | orchestrator | 2025-10-09 10:22:33.077354 | orchestrator | 2025-10-09 10:22:33.077366 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-10-09 10:22:33.077378 | orchestrator | 2025-10-09 10:22:33.077390 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-10-09 10:22:33.077401 | orchestrator | Thursday 09 October 2025 10:21:17 +0000 (0:00:00.588) 0:00:00.588 ****** 2025-10-09 10:22:33.077412 | orchestrator | ok: [testbed-manager] => { 2025-10-09 10:22:33.077425 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-10-09 10:22:33.077438 | orchestrator | } 2025-10-09 10:22:33.077449 | orchestrator | 2025-10-09 10:22:33.077460 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-10-09 10:22:33.077471 | orchestrator | Thursday 09 October 2025 10:21:17 +0000 (0:00:00.198) 0:00:00.787 ****** 2025-10-09 10:22:33.077482 | orchestrator | ok: [testbed-manager] 2025-10-09 10:22:33.077494 | orchestrator | 2025-10-09 10:22:33.077504 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-10-09 10:22:33.077515 | orchestrator | Thursday 09 October 2025 10:21:19 +0000 (0:00:01.705) 0:00:02.492 ****** 2025-10-09 10:22:33.077526 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-10-09 10:22:33.077537 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-10-09 10:22:33.077548 | orchestrator | 2025-10-09 10:22:33.077559 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-10-09 10:22:33.077570 | orchestrator | Thursday 09 October 2025 10:21:21 +0000 (0:00:01.881) 0:00:04.373 ****** 2025-10-09 10:22:33.077581 | orchestrator | changed: [testbed-manager] 2025-10-09 10:22:33.077591 | orchestrator | 2025-10-09 10:22:33.077602 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-10-09 10:22:33.077613 | orchestrator | Thursday 09 October 2025 10:21:24 +0000 (0:00:03.149) 0:00:07.523 ****** 2025-10-09 10:22:33.077624 | orchestrator | changed: [testbed-manager] 2025-10-09 10:22:33.077634 | orchestrator | 2025-10-09 10:22:33.077646 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-10-09 10:22:33.077689 | orchestrator | Thursday 09 October 2025 10:21:25 +0000 (0:00:01.309) 0:00:08.833 ****** 2025-10-09 10:22:33.077701 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-10-09 10:22:33.077739 | orchestrator | ok: [testbed-manager] 2025-10-09 10:22:33.077750 | orchestrator | 2025-10-09 10:22:33.077761 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-10-09 10:22:33.077792 | orchestrator | Thursday 09 October 2025 10:21:51 +0000 (0:00:25.673) 0:00:34.506 ****** 2025-10-09 10:22:33.077804 | orchestrator | changed: [testbed-manager] 2025-10-09 10:22:33.077815 | orchestrator | 2025-10-09 10:22:33.077826 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:22:33.077837 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:22:33.077850 | orchestrator | 2025-10-09 10:22:33.077862 | orchestrator | 2025-10-09 10:22:33.077874 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:22:33.077887 | orchestrator | Thursday 09 October 2025 10:21:54 +0000 (0:00:02.937) 0:00:37.444 ****** 2025-10-09 10:22:33.077899 | orchestrator | =============================================================================== 2025-10-09 10:22:33.077911 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.67s 2025-10-09 10:22:33.077924 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.15s 2025-10-09 10:22:33.077936 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.94s 2025-10-09 10:22:33.077948 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.88s 2025-10-09 10:22:33.077961 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.71s 2025-10-09 10:22:33.077973 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.31s 2025-10-09 10:22:33.077986 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.20s 2025-10-09 10:22:33.077998 | orchestrator | 2025-10-09 10:22:33.078010 | orchestrator | 2025-10-09 10:22:33.078080 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-10-09 10:22:33.078093 | orchestrator | 2025-10-09 10:22:33.078105 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-10-09 10:22:33.078117 | orchestrator | Thursday 09 October 2025 10:21:18 +0000 (0:00:00.781) 0:00:00.781 ****** 2025-10-09 10:22:33.078130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-10-09 10:22:33.078144 | orchestrator | 2025-10-09 10:22:33.078156 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-10-09 10:22:33.078169 | orchestrator | Thursday 09 October 2025 10:21:18 +0000 (0:00:00.379) 0:00:01.161 ****** 2025-10-09 10:22:33.078182 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-10-09 10:22:33.078194 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-10-09 10:22:33.078208 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-10-09 10:22:33.078221 | orchestrator | 2025-10-09 10:22:33.078232 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-10-09 10:22:33.078242 | orchestrator | Thursday 09 October 2025 10:21:20 +0000 (0:00:02.218) 0:00:03.379 ****** 2025-10-09 10:22:33.078277 | orchestrator | changed: [testbed-manager] 2025-10-09 10:22:33.078288 | orchestrator | 2025-10-09 10:22:33.078299 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-10-09 10:22:33.078311 | orchestrator | Thursday 09 October 2025 10:21:22 +0000 (0:00:01.622) 
0:00:05.001 ****** 2025-10-09 10:22:33.078335 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-10-09 10:22:33.078347 | orchestrator | ok: [testbed-manager] 2025-10-09 10:22:33.078358 | orchestrator | 2025-10-09 10:22:33.078369 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-10-09 10:22:33.078380 | orchestrator | Thursday 09 October 2025 10:21:59 +0000 (0:00:37.297) 0:00:42.299 ****** 2025-10-09 10:22:33.078392 | orchestrator | changed: [testbed-manager] 2025-10-09 10:22:33.078403 | orchestrator | 2025-10-09 10:22:33.078414 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-10-09 10:22:33.078425 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:02.327) 0:00:44.626 ****** 2025-10-09 10:22:33.078444 | orchestrator | ok: [testbed-manager] 2025-10-09 10:22:33.078455 | orchestrator | 2025-10-09 10:22:33.078466 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-10-09 10:22:33.078477 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:00.816) 0:00:45.443 ****** 2025-10-09 10:22:33.078488 | orchestrator | changed: [testbed-manager] 2025-10-09 10:22:33.078499 | orchestrator | 2025-10-09 10:22:33.078510 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-10-09 10:22:33.078521 | orchestrator | Thursday 09 October 2025 10:22:06 +0000 (0:00:03.230) 0:00:48.673 ****** 2025-10-09 10:22:33.078532 | orchestrator | changed: [testbed-manager] 2025-10-09 10:22:33.078543 | orchestrator | 2025-10-09 10:22:33.078554 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-10-09 10:22:33.078570 | orchestrator | Thursday 09 October 2025 10:22:08 +0000 (0:00:02.049) 0:00:50.722 ****** 2025-10-09 10:22:33.078581 | orchestrator | changed: 
[testbed-manager] 2025-10-09 10:22:33.078592 | orchestrator | 2025-10-09 10:22:33.078603 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-10-09 10:22:33.078614 | orchestrator | Thursday 09 October 2025 10:22:09 +0000 (0:00:01.046) 0:00:51.773 ****** 2025-10-09 10:22:33.078625 | orchestrator | ok: [testbed-manager] 2025-10-09 10:22:33.078636 | orchestrator | 2025-10-09 10:22:33.078647 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:22:33.078658 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:22:33.078669 | orchestrator | 2025-10-09 10:22:33.078680 | orchestrator | 2025-10-09 10:22:33.078691 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:22:33.078702 | orchestrator | Thursday 09 October 2025 10:22:09 +0000 (0:00:00.641) 0:00:52.415 ****** 2025-10-09 10:22:33.078713 | orchestrator | =============================================================================== 2025-10-09 10:22:33.078724 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.30s 2025-10-09 10:22:33.078735 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.23s 2025-10-09 10:22:33.078746 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.33s 2025-10-09 10:22:33.078757 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.22s 2025-10-09 10:22:33.078768 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.05s 2025-10-09 10:22:33.078779 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.62s 2025-10-09 10:22:33.078790 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.05s 
2025-10-09 10:22:33.078801 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.82s
2025-10-09 10:22:33.078812 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.64s
2025-10-09 10:22:33.078823 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.38s
2025-10-09 10:22:33.078834 | orchestrator |
2025-10-09 10:22:33.078845 | orchestrator |
2025-10-09 10:22:33.078856 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:22:33.078867 | orchestrator |
2025-10-09 10:22:33.078878 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:22:33.078889 | orchestrator | Thursday 09 October 2025 10:21:18 +0000 (0:00:00.560) 0:00:00.560 ******
2025-10-09 10:22:33.078899 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-10-09 10:22:33.078910 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-10-09 10:22:33.078921 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-10-09 10:22:33.078932 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-10-09 10:22:33.078943 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-10-09 10:22:33.078960 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-10-09 10:22:33.078972 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-10-09 10:22:33.078983 | orchestrator |
2025-10-09 10:22:33.078994 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-10-09 10:22:33.079005 | orchestrator |
2025-10-09 10:22:33.079016 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-10-09 10:22:33.079027 | orchestrator | Thursday 09 October 2025 10:21:20 +0000 (0:00:01.743) 0:00:02.304 ******
2025-10-09 10:22:33.079051 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:22:33.079065 | orchestrator |
2025-10-09 10:22:33.079076 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-10-09 10:22:33.079087 | orchestrator | Thursday 09 October 2025 10:21:21 +0000 (0:00:01.370) 0:00:03.674 ******
2025-10-09 10:22:33.079098 | orchestrator | ok: [testbed-manager]
2025-10-09 10:22:33.079109 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:22:33.079120 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:22:33.079131 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:22:33.079142 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:22:33.079158 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:22:33.079170 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:22:33.079181 | orchestrator |
2025-10-09 10:22:33.079192 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-10-09 10:22:33.079203 | orchestrator | Thursday 09 October 2025 10:21:23 +0000 (0:00:01.973) 0:00:05.648 ******
2025-10-09 10:22:33.079214 | orchestrator | ok: [testbed-manager]
2025-10-09 10:22:33.079225 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:22:33.079235 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:22:33.079246 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:22:33.079276 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:22:33.079287 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:22:33.079297 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:22:33.079308 | orchestrator |
2025-10-09 10:22:33.079319 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-10-09 10:22:33.079330 | orchestrator | Thursday 09 October 2025 10:21:26 +0000 (0:00:03.252) 0:00:08.900 ******
2025-10-09 10:22:33.079341 | orchestrator | changed: [testbed-manager]
2025-10-09 10:22:33.079352 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:22:33.079363 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:22:33.079374 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:22:33.079384 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:22:33.079395 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:22:33.079406 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:22:33.079416 | orchestrator |
2025-10-09 10:22:33.079427 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-10-09 10:22:33.079443 | orchestrator | Thursday 09 October 2025 10:21:30 +0000 (0:00:03.498) 0:00:12.399 ******
2025-10-09 10:22:33.079454 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:22:33.079465 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:22:33.079476 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:22:33.079486 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:22:33.079497 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:22:33.079508 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:22:33.079518 | orchestrator | changed: [testbed-manager]
2025-10-09 10:22:33.079529 | orchestrator |
2025-10-09 10:22:33.079540 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-10-09 10:22:33.079551 | orchestrator | Thursday 09 October 2025 10:21:43 +0000 (0:00:13.137) 0:00:25.536 ******
2025-10-09 10:22:33.079562 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:22:33.079573 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:22:33.079584 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:22:33.079600 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:22:33.079611 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:22:33.079622 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:22:33.079633 | orchestrator | changed: [testbed-manager]
2025-10-09 10:22:33.079643 | orchestrator |
2025-10-09 10:22:33.079654 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-10-09 10:22:33.079669 | orchestrator | Thursday 09 October 2025 10:22:06 +0000 (0:00:23.292) 0:00:48.829 ******
2025-10-09 10:22:33.079689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:22:33.079705 | orchestrator |
2025-10-09 10:22:33.079717 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-10-09 10:22:33.079728 | orchestrator | Thursday 09 October 2025 10:22:08 +0000 (0:00:01.976) 0:00:50.806 ******
2025-10-09 10:22:33.079739 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-10-09 10:22:33.079750 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-10-09 10:22:33.079761 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-10-09 10:22:33.079772 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-10-09 10:22:33.079783 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-10-09 10:22:33.079794 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-10-09 10:22:33.079804 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-10-09 10:22:33.079815 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-10-09 10:22:33.079826 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-10-09 10:22:33.079837 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-10-09 10:22:33.079848 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-10-09 10:22:33.079859 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-10-09 10:22:33.079870 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-10-09 10:22:33.079880 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-10-09 10:22:33.079891 | orchestrator |
2025-10-09 10:22:33.079902 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-10-09 10:22:33.079914 | orchestrator | Thursday 09 October 2025 10:22:16 +0000 (0:00:07.219) 0:00:58.025 ******
2025-10-09 10:22:33.079925 | orchestrator | ok: [testbed-manager]
2025-10-09 10:22:33.079936 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:22:33.079946 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:22:33.079957 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:22:33.079968 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:22:33.079979 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:22:33.079990 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:22:33.080001 | orchestrator |
2025-10-09 10:22:33.080012 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-10-09 10:22:33.080023 | orchestrator | Thursday 09 October 2025 10:22:17 +0000 (0:00:01.439) 0:00:59.465 ******
2025-10-09 10:22:33.080034 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:22:33.080045 | orchestrator | changed: [testbed-manager]
2025-10-09 10:22:33.080056 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:22:33.080066 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:22:33.080077 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:22:33.080088 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:22:33.080099 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:22:33.080110 | orchestrator |
2025-10-09 10:22:33.080121 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-10-09 10:22:33.080138 | orchestrator | Thursday 09 October 2025 10:22:19 +0000 (0:00:02.175) 0:01:01.640 ******
2025-10-09 10:22:33.080149 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:22:33.080161 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:22:33.080172 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:22:33.080188 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:22:33.080199 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:22:33.080210 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:22:33.080221 | orchestrator | ok: [testbed-manager]
2025-10-09 10:22:33.080232 | orchestrator |
2025-10-09 10:22:33.080243 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-10-09 10:22:33.080273 | orchestrator | Thursday 09 October 2025 10:22:21 +0000 (0:00:01.520) 0:01:03.161 ******
2025-10-09 10:22:33.080284 | orchestrator | ok: [testbed-manager]
2025-10-09 10:22:33.080295 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:22:33.080306 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:22:33.080317 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:22:33.080328 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:22:33.080339 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:22:33.080349 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:22:33.080360 | orchestrator |
2025-10-09 10:22:33.080371 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-10-09 10:22:33.080382 | orchestrator | Thursday 09 October 2025 10:22:23 +0000 (0:00:01.828) 0:01:04.991 ******
2025-10-09 10:22:33.080393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-10-09 10:22:33.080410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:22:33.080422 | orchestrator |
2025-10-09 10:22:33.080433 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-10-09 10:22:33.080444 | orchestrator | Thursday 09 October 2025 10:22:25 +0000 (0:00:02.179) 0:01:07.170 ******
2025-10-09 10:22:33.080455 | orchestrator | changed: [testbed-manager]
2025-10-09 10:22:33.080466 | orchestrator |
2025-10-09 10:22:33.080476 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-10-09 10:22:33.080487 | orchestrator | Thursday 09 October 2025 10:22:28 +0000 (0:00:03.378) 0:01:10.549 ******
2025-10-09 10:22:33.080498 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:22:33.080509 | orchestrator | changed: [testbed-manager]
2025-10-09 10:22:33.080520 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:22:33.080531 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:22:33.080541 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:22:33.080552 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:22:33.080563 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:22:33.080574 | orchestrator |
2025-10-09 10:22:33.080585 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:22:33.080596 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:22:33.080607 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:22:33.080618 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:22:33.080629 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:22:33.080641 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:22:33.080651 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:22:33.080662 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:22:33.080673 | orchestrator |
2025-10-09 10:22:33.080684 | orchestrator |
2025-10-09 10:22:33.080706 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:22:33.080717 | orchestrator | Thursday 09 October 2025 10:22:31 +0000 (0:00:03.199) 0:01:13.749 ******
2025-10-09 10:22:33.080728 | orchestrator | ===============================================================================
2025-10-09 10:22:33.080739 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 23.29s
2025-10-09 10:22:33.080750 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.14s
2025-10-09 10:22:33.080761 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.22s
2025-10-09 10:22:33.080772 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.50s
2025-10-09 10:22:33.080783 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.38s
2025-10-09 10:22:33.080793 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.25s
2025-10-09 10:22:33.080804 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.20s
2025-10-09 10:22:33.080815 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.18s
2025-10-09 10:22:33.080826 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.18s
2025-10-09 10:22:33.080836 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.98s
2025-10-09 10:22:33.080847 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.97s
2025-10-09 10:22:33.080863 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.83s
2025-10-09 10:22:33.080875 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.74s
2025-10-09 10:22:33.080886 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.52s
2025-10-09 10:22:33.080897 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.44s
2025-10-09 10:22:33.080908 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.37s
2025-10-09 10:22:33.080919 | orchestrator | 2025-10-09 10:22:33 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED
2025-10-09 10:22:33.080930 | orchestrator | 2025-10-09 10:22:33 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:22:36.130207 | orchestrator | 2025-10-09 10:22:36 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:22:36.132041 | orchestrator | 2025-10-09 10:22:36 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:22:36.133863 | orchestrator | 2025-10-09 10:22:36 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:22:36.135231 | orchestrator | 2025-10-09 10:22:36 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state STARTED
2025-10-09 10:22:36.135298 | orchestrator | 2025-10-09 10:22:36 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:22:51.372736 | orchestrator | 2025-10-09 10:22:51 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:22:51.373407 | orchestrator | 2025-10-09 10:22:51 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:22:51.374442 | orchestrator | 2025-10-09 10:22:51 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:22:51.376365 | orchestrator | 2025-10-09 10:22:51 | INFO  | Task 589aec32-0864-4140-9acd-4510bb3ffc9e is in state SUCCESS
2025-10-09 10:22:51.377665 | orchestrator | 2025-10-09 10:22:51 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:22:54.432045 | orchestrator | 2025-10-09 10:22:54 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:22:54.433709 | orchestrator | 2025-10-09 10:22:54 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state STARTED
2025-10-09 10:22:54.437477 | orchestrator | 2025-10-09 10:22:54 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:22:54.437514 | orchestrator | 2025-10-09 10:22:54 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:23:43.215417 | orchestrator | 2025-10-09 10:23:43 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:23:43.221351 | orchestrator |
2025-10-09 10:23:43.221424 | orchestrator |
2025-10-09 10:23:43.221449 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-10-09 10:23:43.221565 | orchestrator |
2025-10-09 10:23:43.221827 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-10-09 10:23:43.221853 | orchestrator | Thursday 09 October 2025 10:21:39 +0000 (0:00:00.285) 0:00:00.285 ******
2025-10-09 10:23:43.221952 | orchestrator | ok: [testbed-manager]
2025-10-09 10:23:43.221979 | orchestrator |
2025-10-09 10:23:43.222005 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-10-09 10:23:43.222104 | orchestrator | Thursday 09 October 2025 10:21:40 +0000 (0:00:01.238) 0:00:01.523 ******
2025-10-09 10:23:43.222132 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-10-09 10:23:43.222157 | orchestrator |
2025-10-09 10:23:43.222185 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-10-09 10:23:43.222210 | orchestrator | Thursday 09 October 2025 10:21:41 +0000 (0:00:00.666) 0:00:02.189 ******
2025-10-09 10:23:43.222236 | orchestrator | changed: [testbed-manager]
2025-10-09 10:23:43.222304 | orchestrator |
2025-10-09 10:23:43.222324 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-10-09 10:23:43.222343 | orchestrator | Thursday 09 October 2025 10:21:42 +0000 (0:00:01.439) 0:00:03.629 ******
2025-10-09 10:23:43.222363 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-10-09 10:23:43.222382 | orchestrator | ok: [testbed-manager]
2025-10-09 10:23:43.222401 | orchestrator |
2025-10-09 10:23:43.222420 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-10-09 10:23:43.222439 | orchestrator | Thursday 09 October 2025 10:22:40 +0000 (0:00:57.500) 0:01:01.129 ******
2025-10-09 10:23:43.222458 | orchestrator | changed: [testbed-manager]
2025-10-09 10:23:43.222477 | orchestrator |
2025-10-09 10:23:43.222497 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:23:43.222518 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:23:43.222566 | orchestrator |
2025-10-09 10:23:43.222586 | orchestrator |
2025-10-09 10:23:43.222606 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:23:43.222626 | orchestrator | Thursday 09 October 2025 10:22:48 +0000 (0:00:08.493) 0:01:09.622 ******
2025-10-09 10:23:43.222645 | orchestrator | ===============================================================================
2025-10-09 10:23:43.222665 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 57.50s
2025-10-09 10:23:43.222685 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.49s
2025-10-09 10:23:43.222716 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.44s
2025-10-09 10:23:43.222736 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.24s
2025-10-09 10:23:43.222753 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.67s
2025-10-09 10:23:43.222770 | orchestrator |
2025-10-09 10:23:43.222788 | orchestrator |
2025-10-09 10:23:43.222804 | orchestrator | PLAY [Apply role common] *******************************************************
2025-10-09 10:23:43.222822 | orchestrator |
2025-10-09 10:23:43.222839 | orchestrator | TASK [common : include_tasks] **************************************************
2025-10-09 10:23:43.222857 | orchestrator | Thursday 09 October 2025 10:21:08 +0000 (0:00:00.367) 0:00:00.367 ******
2025-10-09 10:23:43.222875 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:23:43.222893 | orchestrator |
2025-10-09 10:23:43.222910 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-10-09 10:23:43.222929 | orchestrator | Thursday 09 October 2025 10:21:10 +0000 (0:00:01.520) 0:00:01.887 ******
2025-10-09 10:23:43.222947 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-09 10:23:43.222964 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-09 10:23:43.222981 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-09 10:23:43.222999 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-09 10:23:43.223018 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-09 10:23:43.223060 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-09 10:23:43.223080 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-09 10:23:43.223099 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-09 10:23:43.223116 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-09 10:23:43.223134 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-09 10:23:43.223152 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-09 10:23:43.223171 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-09 10:23:43.223187 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-09 10:23:43.223204 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-09 10:23:43.223222 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-10-09 10:23:43.223267 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-09 10:23:43.223325 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-09 10:23:43.223345 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-10-09 10:23:43.223363 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-09 10:23:43.223380 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-09 10:23:43.223397 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-10-09 10:23:43.223414 | orchestrator |
2025-10-09 10:23:43.223432 | orchestrator | TASK [common : include_tasks] **************************************************
2025-10-09 10:23:43.223452 | orchestrator | Thursday 09 October 2025 10:21:14 +0000 (0:00:04.493) 0:00:06.381 ******
2025-10-09 10:23:43.223470 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:23:43.223490 | orchestrator |
2025-10-09 10:23:43.223508 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-10-09 10:23:43.223527 | orchestrator | Thursday 09 October 2025 10:21:16 +0000 (0:00:01.160) 0:00:07.542 ******
2025-10-09 10:23:43.223550 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.223588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.223608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.223634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.223646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.223657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.223684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223729 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-10-09 10:23:43.223856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223941 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.223964 | orchestrator | 2025-10-09 10:23:43.223975 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS 
certificate] *** 2025-10-09 10:23:43.223987 | orchestrator | Thursday 09 October 2025 10:21:21 +0000 (0:00:05.664) 0:00:13.206 ****** 2025-10-09 10:23:43.224012 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224024 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224036 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224048 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:23:43.224060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224147 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:43.224157 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:43.224167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224209 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:23:43.224219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224340 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224358 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:23:43.224368 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:23:43.224379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224410 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:23:43.224420 | orchestrator | 2025-10-09 10:23:43.224430 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-10-09 10:23:43.224440 | orchestrator | Thursday 09 October 2025 10:21:24 +0000 (0:00:02.645) 0:00:15.852 ****** 2025-10-09 10:23:43.224456 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224467 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224483 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224498 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:23:43.224513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224583 | 
orchestrator | skipping: [testbed-node-0] 2025-10-09 10:23:43.224600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-10-09 10:23:43.224618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:23:43.224653 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:23:43.224671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.224696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.224725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.224743 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:23:43.224759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.224782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.224800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.224816 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:23:43.224833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.224849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.224880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.224937 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:23:43.224956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.224974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.224992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225009 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:23:43.225026 | orchestrator |
2025-10-09 10:23:43.225043 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-10-09 10:23:43.225067 | orchestrator | Thursday 09 October 2025 10:21:28 +0000 (0:00:04.352) 0:00:20.204 ******
2025-10-09 10:23:43.225083 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:23:43.225094 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:23:43.225104 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:23:43.225113 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:23:43.225123 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:23:43.225132 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:23:43.225142 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:23:43.225151 | orchestrator |
2025-10-09 10:23:43.225161 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-10-09 10:23:43.225170 | orchestrator | Thursday 09 October 2025 10:21:30 +0000 (0:00:01.444) 0:00:21.649 ******
2025-10-09 10:23:43.225180 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:23:43.225189 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:23:43.225199 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:23:43.225209 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:23:43.225218 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:23:43.225228 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:23:43.225237 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:23:43.225311 | orchestrator |
2025-10-09 10:23:43.225321 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-10-09 10:23:43.225331 | orchestrator | Thursday 09 October 2025 10:21:32 +0000 (0:00:02.380) 0:00:24.029 ******
2025-10-09 10:23:43.225341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.225352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.225383 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.225394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.225405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.225415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.225450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.225467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225530 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225593 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225603 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.225623 | orchestrator |
2025-10-09 10:23:43.225633 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-10-09 10:23:43.225643 | orchestrator | Thursday 09 October 2025 10:21:41 +0000 (0:00:08.821) 0:00:32.851 ******
2025-10-09 10:23:43.225653 | orchestrator | [WARNING]: Skipped
2025-10-09 10:23:43.225663 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-10-09 10:23:43.225673 | orchestrator | to this access issue:
2025-10-09 10:23:43.225683 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-10-09 10:23:43.225693 | orchestrator | directory
2025-10-09 10:23:43.225702 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:23:43.225712 | orchestrator |
2025-10-09 10:23:43.225726 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-10-09 10:23:43.225736 | orchestrator | Thursday 09 October 2025 10:21:43 +0000 (0:00:02.020) 0:00:34.872 ******
2025-10-09 10:23:43.225746 | orchestrator | [WARNING]: Skipped
2025-10-09 10:23:43.225755 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-10-09 10:23:43.225765 | orchestrator | to this access issue:
2025-10-09 10:23:43.225775 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-10-09 10:23:43.225784 | orchestrator | directory
2025-10-09 10:23:43.225794 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:23:43.225810 | orchestrator |
2025-10-09 10:23:43.225820 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-10-09 10:23:43.225830 | orchestrator | Thursday 09 October 2025 10:21:44 +0000 (0:00:00.877) 0:00:35.749 ******
2025-10-09 10:23:43.225840 | orchestrator | [WARNING]: Skipped
2025-10-09 10:23:43.225849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-10-09 10:23:43.225859 | orchestrator | to this access issue:
2025-10-09 10:23:43.225869 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-10-09 10:23:43.225879 | orchestrator | directory
2025-10-09 10:23:43.225888 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:23:43.225898 | orchestrator |
2025-10-09 10:23:43.225908 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-10-09 10:23:43.225917 | orchestrator | Thursday 09 October 2025 10:21:45 +0000 (0:00:00.950) 0:00:36.699 ******
2025-10-09 10:23:43.225927 | orchestrator | [WARNING]: Skipped
2025-10-09 10:23:43.225937 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-10-09 10:23:43.225946 | orchestrator | to this access issue:
2025-10-09 10:23:43.225956 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-10-09 10:23:43.225966 | orchestrator | directory
2025-10-09 10:23:43.225975 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:23:43.225985 | orchestrator |
2025-10-09 10:23:43.225995 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-10-09 10:23:43.226004 | orchestrator | Thursday 09 October 2025 10:21:46 +0000 (0:00:00.731) 0:00:37.431 ******
2025-10-09 10:23:43.226045 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:23:43.226058 | orchestrator | changed: [testbed-manager]
2025-10-09 10:23:43.226067 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:23:43.226077 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:23:43.226086 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:23:43.226096 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:23:43.226106 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:23:43.226115 | orchestrator |
2025-10-09 10:23:43.226125 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-10-09 10:23:43.226135 | orchestrator | Thursday 09 October 2025 10:21:50 +0000 (0:00:04.570) 0:00:42.001 ******
2025-10-09 10:23:43.226145 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:23:43.226154 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:23:43.226164 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:23:43.226180 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:23:43.226191 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:23:43.226200 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:23:43.226210 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-10-09 10:23:43.226219 | orchestrator |
2025-10-09 10:23:43.226229 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-10-09 10:23:43.226239 | orchestrator | Thursday 09 October 2025 10:21:55 +0000 (0:00:04.610) 0:00:46.612 ******
2025-10-09 10:23:43.226277 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:23:43.226294 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:23:43.226310 | orchestrator | changed: [testbed-manager]
2025-10-09 10:23:43.226320 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:23:43.226330 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:23:43.226339 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:23:43.226349 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:23:43.226367 | orchestrator |
2025-10-09 10:23:43.226377 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-10-09 10:23:43.226386 | orchestrator | Thursday 09 October 2025 10:21:59 +0000 (0:00:03.897) 0:00:50.510 ******
2025-10-09 10:23:43.226396 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.226412 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226423 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.226434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226444 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226465 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.226475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226493 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.226503 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.226513 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.226529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226539 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.226550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226595 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226611 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226622 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226632 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226643 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226653 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.226662 | orchestrator |
2025-10-09 10:23:43.226672 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-10-09 10:23:43.226682 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:03.482) 0:00:53.992 ******
2025-10-09 10:23:43.226692 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-09 10:23:43.226702 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-09 10:23:43.226712 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-09 10:23:43.226736 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-09 10:23:43.226746 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-09 10:23:43.226756 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-09 10:23:43.226766 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-10-09 10:23:43.226775 | orchestrator |
2025-10-09 10:23:43.226785 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-10-09 10:23:43.226795 | orchestrator | Thursday 09 October 2025 10:22:05 +0000 (0:00:03.185) 0:00:57.177 ******
2025-10-09 10:23:43.226805 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-09 10:23:43.226815 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-09 10:23:43.226824 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-09 10:23:43.226834 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-09 10:23:43.226844 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-09 10:23:43.226854 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-09 10:23:43.226863 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-10-09 10:23:43.226873 | orchestrator |
2025-10-09 10:23:43.226883 | orchestrator | TASK [common : Check common containers] ****************************************
2025-10-09 10:23:43.226892 | orchestrator | Thursday 09 October 2025 10:22:08 +0000 (0:00:03.131) 0:01:00.308 ******
2025-10-09 10:23:43.226907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.226918 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-10-09 10:23:43.226928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name':
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.226939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:23:43.226958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:23:43.227112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227128 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:23:43.227175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:23:43.227203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-10-09 10:23:43.227299 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:23:43.227367 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:23:43.227377 | orchestrator |
2025-10-09 10:23:43.227392 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-10-09 10:23:43.227402 | orchestrator | Thursday 09 October 2025 10:22:14 +0000 (0:00:05.859) 0:01:06.167 ******
2025-10-09 10:23:43.227412 | orchestrator | changed: [testbed-manager]
2025-10-09 10:23:43.227422 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:23:43.227432 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:23:43.227442 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:23:43.227452 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:23:43.227461 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:23:43.227471 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:23:43.227480 | orchestrator |
2025-10-09 10:23:43.227490 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-10-09 10:23:43.227500 | orchestrator | Thursday 09 October 2025 10:22:16 +0000 (0:00:01.739) 0:01:07.906 ******
2025-10-09 10:23:43.227509 | orchestrator | changed: [testbed-manager]
2025-10-09 10:23:43.227519 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:23:43.227529 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:23:43.227539 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:23:43.227548 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:23:43.227557 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:23:43.227567 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:23:43.227577 | orchestrator |
2025-10-09 10:23:43.227586 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-09 10:23:43.227596 | orchestrator | Thursday 09 October 2025 10:22:17 +0000 (0:00:01.244) 0:01:09.151 ****** 2025-10-09 10:23:43.227606 | orchestrator | 2025-10-09 10:23:43.227616 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-09 10:23:43.227625 | orchestrator | Thursday 09 October 2025 10:22:17 +0000 (0:00:00.085) 0:01:09.237 ****** 2025-10-09 10:23:43.227635 | orchestrator | 2025-10-09 10:23:43.227644 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-09 10:23:43.227654 | orchestrator | Thursday 09 October 2025 10:22:17 +0000 (0:00:00.063) 0:01:09.300 ****** 2025-10-09 10:23:43.227663 | orchestrator | 2025-10-09 10:23:43.227673 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-09 10:23:43.227683 | orchestrator | Thursday 09 October 2025 10:22:17 +0000 (0:00:00.064) 0:01:09.365 ****** 2025-10-09 10:23:43.227692 | orchestrator | 2025-10-09 10:23:43.227702 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-09 10:23:43.227711 | orchestrator | Thursday 09 October 2025 10:22:18 +0000 (0:00:00.231) 0:01:09.597 ****** 2025-10-09 10:23:43.227721 | orchestrator | 2025-10-09 10:23:43.227730 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-09 10:23:43.227740 | orchestrator | Thursday 09 October 2025 10:22:18 +0000 (0:00:00.067) 0:01:09.664 ****** 2025-10-09 10:23:43.227749 | orchestrator | 2025-10-09 10:23:43.227763 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-10-09 10:23:43.227780 | orchestrator | Thursday 09 October 2025 10:22:18 +0000 (0:00:00.070) 0:01:09.735 ****** 2025-10-09 10:23:43.227789 | orchestrator 
|
2025-10-09 10:23:43.227799 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-10-09 10:23:43.227809 | orchestrator | Thursday 09 October 2025 10:22:18 +0000 (0:00:00.123) 0:01:09.858 ******
2025-10-09 10:23:43.227818 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:23:43.227828 | orchestrator | changed: [testbed-manager]
2025-10-09 10:23:43.227838 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:23:43.227848 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:23:43.227857 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:23:43.227867 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:23:43.227876 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:23:43.227886 | orchestrator |
2025-10-09 10:23:43.227895 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-10-09 10:23:43.227905 | orchestrator | Thursday 09 October 2025 10:22:56 +0000 (0:00:38.545) 0:01:48.404 ******
2025-10-09 10:23:43.227915 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:23:43.227924 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:23:43.227933 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:23:43.227943 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:23:43.227953 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:23:43.227962 | orchestrator | changed: [testbed-manager]
2025-10-09 10:23:43.227972 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:23:43.227981 | orchestrator |
2025-10-09 10:23:43.227991 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-10-09 10:23:43.228001 | orchestrator | Thursday 09 October 2025 10:23:29 +0000 (0:00:32.839) 0:02:21.243 ******
2025-10-09 10:23:43.228010 | orchestrator | ok: [testbed-manager]
2025-10-09 10:23:43.228020 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:23:43.228029 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:23:43.228039 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:23:43.228048 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:23:43.228058 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:23:43.228067 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:23:43.228077 | orchestrator |
2025-10-09 10:23:43.228086 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-10-09 10:23:43.228096 | orchestrator | Thursday 09 October 2025 10:23:32 +0000 (0:00:02.414) 0:02:23.658 ******
2025-10-09 10:23:43.228106 | orchestrator | changed: [testbed-manager]
2025-10-09 10:23:43.228116 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:23:43.228125 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:23:43.228135 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:23:43.228144 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:23:43.228154 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:23:43.228163 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:23:43.228173 | orchestrator |
2025-10-09 10:23:43.228182 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:23:43.228193 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:23:43.228204 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:23:43.228218 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:23:43.228228 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:23:43.228238 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:23:43.228276 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:23:43.228286 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-10-09 10:23:43.228296 | orchestrator |
2025-10-09 10:23:43.228306 | orchestrator |
2025-10-09 10:23:43.228315 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:23:43.228325 | orchestrator | Thursday 09 October 2025 10:23:41 +0000 (0:00:09.554) 0:02:33.212 ******
2025-10-09 10:23:43.228335 | orchestrator | ===============================================================================
2025-10-09 10:23:43.228345 | orchestrator | common : Restart fluentd container ------------------------------------- 38.55s
2025-10-09 10:23:43.228355 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.84s
2025-10-09 10:23:43.228365 | orchestrator | common : Restart cron container ----------------------------------------- 9.55s
2025-10-09 10:23:43.228374 | orchestrator | common : Copying over config.json files for services -------------------- 8.82s
2025-10-09 10:23:43.228384 | orchestrator | common : Check common containers ---------------------------------------- 5.86s
2025-10-09 10:23:43.228394 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.66s
2025-10-09 10:23:43.228403 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.61s
2025-10-09 10:23:43.228413 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.57s
2025-10-09 10:23:43.228423 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.49s
2025-10-09 10:23:43.228432 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.35s
2025-10-09 10:23:43.228442 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.90s
2025-10-09 10:23:43.228456 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.48s
2025-10-09 10:23:43.228466 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.19s
2025-10-09 10:23:43.228476 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.13s
2025-10-09 10:23:43.228486 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.65s
2025-10-09 10:23:43.228496 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.41s
2025-10-09 10:23:43.228505 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.38s
2025-10-09 10:23:43.228515 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.02s
2025-10-09 10:23:43.228525 | orchestrator | common : Creating log volume -------------------------------------------- 1.74s
2025-10-09 10:23:43.228534 | orchestrator | common : include_tasks -------------------------------------------------- 1.52s
2025-10-09 10:23:43.228544 | orchestrator | 2025-10-09 10:23:43 | INFO  | Task e709694a-6adc-46b5-84d0-ecc938bc05b9 is in state SUCCESS
2025-10-09 10:23:43.228554 | orchestrator | 2025-10-09 10:23:43 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:23:43.228564 | orchestrator | 2025-10-09 10:23:43 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:23:46.263613 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:23:46.263711 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task c3c8b2e5-54b0-40d3-9077-c0df935f3c9a is in state STARTED
2025-10-09 10:23:46.263726 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:23:46.266310 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task
6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:23:46.266349 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:23:46.266387 | orchestrator | 2025-10-09 10:23:46 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED 2025-10-09 10:23:46.266399 | orchestrator | 2025-10-09 10:23:46 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:49.308145 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:23:49.308670 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task c3c8b2e5-54b0-40d3-9077-c0df935f3c9a is in state STARTED 2025-10-09 10:23:49.309655 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:23:49.310643 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:23:49.313891 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:23:49.314768 | orchestrator | 2025-10-09 10:23:49 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED 2025-10-09 10:23:49.314797 | orchestrator | 2025-10-09 10:23:49 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:52.348436 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:23:52.350133 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task c3c8b2e5-54b0-40d3-9077-c0df935f3c9a is in state STARTED 2025-10-09 10:23:52.351177 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:23:52.352464 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:23:52.353614 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task 
29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:23:52.355778 | orchestrator | 2025-10-09 10:23:52 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED 2025-10-09 10:23:52.355808 | orchestrator | 2025-10-09 10:23:52 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:55.405292 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:23:55.411342 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task c3c8b2e5-54b0-40d3-9077-c0df935f3c9a is in state STARTED 2025-10-09 10:23:55.421787 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:23:55.431941 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:23:55.443384 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:23:55.446984 | orchestrator | 2025-10-09 10:23:55 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED 2025-10-09 10:23:55.447027 | orchestrator | 2025-10-09 10:23:55 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:23:58.484104 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:23:58.485817 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task c3c8b2e5-54b0-40d3-9077-c0df935f3c9a is in state STARTED 2025-10-09 10:23:58.486933 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:23:58.488515 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:23:58.489511 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:23:58.490966 | orchestrator | 2025-10-09 10:23:58 | INFO  | Task 
10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED
2025-10-09 10:23:58.490995 | orchestrator | 2025-10-09 10:23:58 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:01.540049 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:01.541999 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task c3c8b2e5-54b0-40d3-9077-c0df935f3c9a is in state STARTED
2025-10-09 10:24:01.542771 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:01.544162 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:01.547470 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:01.558808 | orchestrator | 2025-10-09 10:24:01 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED
2025-10-09 10:24:01.558839 | orchestrator | 2025-10-09 10:24:01 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:04.634140 | orchestrator | 2025-10-09 10:24:04 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:04.635451 | orchestrator | 2025-10-09 10:24:04 | INFO  | Task c3c8b2e5-54b0-40d3-9077-c0df935f3c9a is in state STARTED
2025-10-09 10:24:04.638373 | orchestrator | 2025-10-09 10:24:04 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:04.639209 | orchestrator | 2025-10-09 10:24:04 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:04.640556 | orchestrator | 2025-10-09 10:24:04 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:04.642115 | orchestrator | 2025-10-09 10:24:04 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED
2025-10-09 10:24:04.642382 | orchestrator | 2025-10-09 10:24:04 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:07.671847 | orchestrator | 2025-10-09 10:24:07 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:07.672509 | orchestrator | 2025-10-09 10:24:07 | INFO  | Task c3c8b2e5-54b0-40d3-9077-c0df935f3c9a is in state SUCCESS
2025-10-09 10:24:07.673936 | orchestrator | 2025-10-09 10:24:07 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:07.675413 | orchestrator | 2025-10-09 10:24:07 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:07.677261 | orchestrator | 2025-10-09 10:24:07 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:07.678395 | orchestrator | 2025-10-09 10:24:07 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED
2025-10-09 10:24:07.678500 | orchestrator | 2025-10-09 10:24:07 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:10.752025 | orchestrator | 2025-10-09 10:24:10 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:10.752123 | orchestrator | 2025-10-09 10:24:10 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:10.752136 | orchestrator | 2025-10-09 10:24:10 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:10.752147 | orchestrator | 2025-10-09 10:24:10 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:10.752157 | orchestrator | 2025-10-09 10:24:10 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:10.752210 | orchestrator | 2025-10-09 10:24:10 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED
2025-10-09 10:24:10.752221 | orchestrator | 2025-10-09 10:24:10 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:14.150878 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:14.151697 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:14.153016 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:14.157403 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:14.157426 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:14.157436 | orchestrator | 2025-10-09 10:24:14 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED
2025-10-09 10:24:14.157447 | orchestrator | 2025-10-09 10:24:14 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:17.286512 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:17.286592 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:17.286605 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:17.286617 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:17.286628 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:17.286639 | orchestrator | 2025-10-09 10:24:17 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state STARTED
2025-10-09 10:24:17.286650 | orchestrator | 2025-10-09 10:24:17 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:20.327803 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:20.328294 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task
e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:20.331123 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:20.335178 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:20.336295 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:20.337456 | orchestrator | 2025-10-09 10:24:20 | INFO  | Task 10e9876c-6d41-4310-b137-2be080e4a286 is in state SUCCESS
2025-10-09 10:24:20.339342 | orchestrator |
2025-10-09 10:24:20.339372 | orchestrator |
2025-10-09 10:24:20.339385 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:24:20.339397 | orchestrator |
2025-10-09 10:24:20.339408 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:24:20.339420 | orchestrator | Thursday 09 October 2025 10:23:50 +0000 (0:00:00.407) 0:00:00.408 ******
2025-10-09 10:24:20.339431 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:24:20.339443 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:24:20.339454 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:24:20.339465 | orchestrator |
2025-10-09 10:24:20.339476 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:24:20.339487 | orchestrator | Thursday 09 October 2025 10:23:50 +0000 (0:00:00.401) 0:00:00.809 ******
2025-10-09 10:24:20.339498 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-10-09 10:24:20.339536 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-10-09 10:24:20.339547 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-10-09 10:24:20.339558 | orchestrator |
2025-10-09 10:24:20.339569 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-10-09 10:24:20.339580 | orchestrator |
2025-10-09 10:24:20.339591 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-10-09 10:24:20.339601 | orchestrator | Thursday 09 October 2025 10:23:51 +0000 (0:00:00.723) 0:00:01.532 ******
2025-10-09 10:24:20.339612 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:24:20.339625 | orchestrator |
2025-10-09 10:24:20.339636 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-10-09 10:24:20.339647 | orchestrator | Thursday 09 October 2025 10:23:52 +0000 (0:00:01.181) 0:00:02.714 ******
2025-10-09 10:24:20.339658 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-10-09 10:24:20.339670 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-10-09 10:24:20.339680 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-10-09 10:24:20.339691 | orchestrator |
2025-10-09 10:24:20.339702 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-10-09 10:24:20.339713 | orchestrator | Thursday 09 October 2025 10:23:53 +0000 (0:00:00.961) 0:00:03.675 ******
2025-10-09 10:24:20.339724 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-10-09 10:24:20.339735 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-10-09 10:24:20.339745 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-10-09 10:24:20.339756 | orchestrator |
2025-10-09 10:24:20.339767 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-10-09 10:24:20.339778 | orchestrator | Thursday 09 October 2025 10:23:56 +0000 (0:00:02.873) 0:00:06.549 ******
2025-10-09 10:24:20.339788 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:24:20.339799 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:24:20.339827 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:24:20.339839 | orchestrator |
2025-10-09 10:24:20.339850 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-10-09 10:24:20.339860 | orchestrator | Thursday 09 October 2025 10:23:58 +0000 (0:00:02.538) 0:00:09.087 ******
2025-10-09 10:24:20.339871 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:24:20.339882 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:24:20.339893 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:24:20.339903 | orchestrator |
2025-10-09 10:24:20.339914 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:24:20.339925 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:24:20.339939 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:24:20.339952 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:24:20.339965 | orchestrator |
2025-10-09 10:24:20.339977 | orchestrator |
2025-10-09 10:24:20.339989 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:24:20.340001 | orchestrator | Thursday 09 October 2025 10:24:06 +0000 (0:00:08.139) 0:00:17.227 ******
2025-10-09 10:24:20.340013 | orchestrator | ===============================================================================
2025-10-09 10:24:20.340025 | orchestrator | memcached : Restart memcached container --------------------------------- 8.14s
2025-10-09 10:24:20.340038 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.87s
2025-10-09 10:24:20.340050 | orchestrator | memcached : Check memcached container ----------------------------------- 2.54s
2025-10-09 10:24:20.340062 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.18s
2025-10-09 10:24:20.340082 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.96s
2025-10-09 10:24:20.340095 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s
2025-10-09 10:24:20.340107 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s
2025-10-09 10:24:20.340119 | orchestrator |
2025-10-09 10:24:20.340131 | orchestrator |
2025-10-09 10:24:20.340143 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:24:20.340155 | orchestrator |
2025-10-09 10:24:20.340167 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:24:20.340180 | orchestrator | Thursday 09 October 2025 10:23:49 +0000 (0:00:00.297) 0:00:00.297 ******
2025-10-09 10:24:20.340192 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:24:20.340204 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:24:20.340216 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:24:20.340228 | orchestrator |
2025-10-09 10:24:20.340272 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:24:20.340297 | orchestrator | Thursday 09 October 2025 10:23:50 +0000 (0:00:00.387) 0:00:00.684 ******
2025-10-09 10:24:20.340309 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-10-09 10:24:20.340320 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-10-09 10:24:20.340331 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-10-09 10:24:20.340342 | orchestrator |
2025-10-09 10:24:20.340352 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-10-09 10:24:20.340363 | orchestrator |
2025-10-09 10:24:20.340374 | orchestrator
| TASK [redis : include_tasks] ***************************************************
2025-10-09 10:24:20.340385 | orchestrator | Thursday 09 October 2025 10:23:50 +0000 (0:00:00.680) 0:00:01.365 ******
2025-10-09 10:24:20.340396 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:24:20.340407 | orchestrator |
2025-10-09 10:24:20.340418 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-10-09 10:24:20.340429 | orchestrator | Thursday 09 October 2025 10:23:51 +0000 (0:00:00.879) 0:00:02.245 ******
2025-10-09 10:24:20.340442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340540 | orchestrator |
2025-10-09 10:24:20.340551 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-10-09 10:24:20.340562 | orchestrator | Thursday 09 October 2025 10:23:53 +0000 (0:00:01.630) 0:00:03.875 ******
2025-10-09 10:24:20.340574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340670 | orchestrator |
2025-10-09 10:24:20.340689 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-10-09 10:24:20.340705 | orchestrator | Thursday 09 October 2025 10:23:57 +0000 (0:00:03.830) 0:00:07.706 ******
2025-10-09 10:24:20.340723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340850 | orchestrator |
2025-10-09 10:24:20.340876 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-10-09 10:24:20.340896 | orchestrator | Thursday 09 October 2025 10:24:01 +0000 (0:00:04.029) 0:00:11.736 ******
2025-10-09 10:24:20.340915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.340997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.341008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-10-09 10:24:20.341020 | orchestrator |
2025-10-09 10:24:20.341031 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-10-09 10:24:20.341042 | orchestrator | Thursday 09 October 2025 10:24:03 +0000 (0:00:02.192) 0:00:13.929 ******
2025-10-09 10:24:20.341052 | orchestrator |
2025-10-09 10:24:20.341063 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-10-09 10:24:20.341081 | orchestrator | Thursday 09 October 2025 10:24:03 +0000 (0:00:00.142) 0:00:14.071 ******
2025-10-09 10:24:20.341102 | orchestrator |
2025-10-09 10:24:20.341119 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-10-09 10:24:20.341136 | orchestrator | Thursday 09 October 2025 10:24:03 +0000 (0:00:00.129) 0:00:14.201 ******
2025-10-09 10:24:20.341155 | orchestrator |
2025-10-09 10:24:20.341175 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-10-09 10:24:20.341194 | orchestrator | Thursday 09 October 2025 10:24:03 +0000 (0:00:00.136) 0:00:14.338 ******
2025-10-09 10:24:20.341211 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:24:20.341223 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:24:20.341234 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:24:20.341267 | orchestrator |
2025-10-09 10:24:20.341278 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-10-09 10:24:20.341289 | orchestrator | Thursday 09 October 2025 10:24:10 +0000 (0:00:07.029) 0:00:21.367 ******
2025-10-09 10:24:20.341300 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:24:20.341311 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:24:20.341321 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:24:20.341332 | orchestrator |
2025-10-09 10:24:20.341343 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:24:20.341354 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:24:20.341374 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:24:20.341385 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:24:20.341396 | orchestrator |
2025-10-09 10:24:20.341407 | orchestrator |
2025-10-09 10:24:20.341418 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:24:20.341429 | orchestrator | Thursday 09 October 2025 10:24:16 +0000 (0:00:05.613) 0:00:26.981 ******
2025-10-09 10:24:20.341440 | orchestrator | ===============================================================================
2025-10-09 10:24:20.341457 | orchestrator | redis : Restart redis container ----------------------------------------- 7.03s
2025-10-09 10:24:20.341468 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.61s
2025-10-09 10:24:20.341479 | orchestrator | redis : Copying over redis config files --------------------------------- 4.03s
2025-10-09 10:24:20.341490 | orchestrator | redis : Copying over default config.json files -------------------------- 3.83s
2025-10-09 10:24:20.341501 | orchestrator | redis : Check redis containers ------------------------------------------ 2.19s
2025-10-09 10:24:20.341512 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.63s
2025-10-09 10:24:20.341523 | orchestrator | redis : include_tasks --------------------------------------------------- 0.88s
2025-10-09 10:24:20.341533 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2025-10-09 10:24:20.341544 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.41s
2025-10-09 10:24:20.341555 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2025-10-09 10:24:20.341566 | orchestrator | 2025-10-09 10:24:20 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:23.382311 | orchestrator | 2025-10-09 10:24:23 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:23.383233 | orchestrator | 2025-10-09 10:24:23 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:23.386649 | orchestrator | 2025-10-09 10:24:23 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:23.387740 | orchestrator | 2025-10-09
10:24:23 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:23.389074 | orchestrator | 2025-10-09 10:24:23 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:23.391288 | orchestrator | 2025-10-09 10:24:23 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:26.440814 | orchestrator | 2025-10-09 10:24:26 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:26.440885 | orchestrator | 2025-10-09 10:24:26 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:26.441444 | orchestrator | 2025-10-09 10:24:26 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:26.444128 | orchestrator | 2025-10-09 10:24:26 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:26.444151 | orchestrator | 2025-10-09 10:24:26 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:26.444163 | orchestrator | 2025-10-09 10:24:26 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:29.524355 | orchestrator | 2025-10-09 10:24:29 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:29.524413 | orchestrator | 2025-10-09 10:24:29 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:29.524455 | orchestrator | 2025-10-09 10:24:29 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:29.524467 | orchestrator | 2025-10-09 10:24:29 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:29.524478 | orchestrator | 2025-10-09 10:24:29 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:29.524489 | orchestrator | 2025-10-09 10:24:29 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:32.757182 | orchestrator | 2025-10-09 10:24:32 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:32.757329 | orchestrator | 2025-10-09 10:24:32 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:32.757345 | orchestrator | 2025-10-09 10:24:32 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:32.757357 | orchestrator | 2025-10-09 10:24:32 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:32.757368 | orchestrator | 2025-10-09 10:24:32 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:32.757379 | orchestrator | 2025-10-09 10:24:32 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:35.702422 | orchestrator | 2025-10-09 10:24:35 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:35.702516 | orchestrator | 2025-10-09 10:24:35 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:35.702537 | orchestrator | 2025-10-09 10:24:35 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:35.702577 | orchestrator | 2025-10-09 10:24:35 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:35.702592 | orchestrator | 2025-10-09 10:24:35 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:35.702603 | orchestrator | 2025-10-09 10:24:35 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:38.749866 | orchestrator | 2025-10-09 10:24:38 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:38.752768 | orchestrator | 2025-10-09 10:24:38 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:38.753570 | orchestrator | 2025-10-09 10:24:38 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:38.754556 | orchestrator | 2025-10-09 10:24:38 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:38.755175 | orchestrator | 2025-10-09 10:24:38 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:38.755404 | orchestrator | 2025-10-09 10:24:38 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:41.799635 | orchestrator | 2025-10-09 10:24:41 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:41.800112 | orchestrator | 2025-10-09 10:24:41 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:41.801351 | orchestrator | 2025-10-09 10:24:41 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:41.802221 | orchestrator | 2025-10-09 10:24:41 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:41.803198 | orchestrator | 2025-10-09 10:24:41 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:41.803391 | orchestrator | 2025-10-09 10:24:41 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:44.854409 | orchestrator | 2025-10-09 10:24:44 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:24:44.854829 | orchestrator | 2025-10-09 10:24:44 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:24:44.855803 | orchestrator | 2025-10-09 10:24:44 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:24:44.857604 | orchestrator | 2025-10-09 10:24:44 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:24:44.859783 | orchestrator | 2025-10-09 10:24:44 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED
2025-10-09 10:24:44.859807 | orchestrator | 2025-10-09 10:24:44 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:24:47.924566 | orchestrator | 2025-10-09 10:24:47 | INFO  | Task
ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:24:47.924654 | orchestrator | 2025-10-09 10:24:47 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED 2025-10-09 10:24:47.925093 | orchestrator | 2025-10-09 10:24:47 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:24:47.926311 | orchestrator | 2025-10-09 10:24:47 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:24:47.927968 | orchestrator | 2025-10-09 10:24:47 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:24:47.927986 | orchestrator | 2025-10-09 10:24:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:50.972369 | orchestrator | 2025-10-09 10:24:50 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:24:50.972954 | orchestrator | 2025-10-09 10:24:50 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED 2025-10-09 10:24:50.973973 | orchestrator | 2025-10-09 10:24:50 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:24:50.974668 | orchestrator | 2025-10-09 10:24:50 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:24:50.975891 | orchestrator | 2025-10-09 10:24:50 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:24:50.975917 | orchestrator | 2025-10-09 10:24:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:54.032054 | orchestrator | 2025-10-09 10:24:54 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:24:54.037871 | orchestrator | 2025-10-09 10:24:54 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED 2025-10-09 10:24:54.045635 | orchestrator | 2025-10-09 10:24:54 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:24:54.046143 | orchestrator | 2025-10-09 10:24:54 | INFO  | Task 
6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:24:54.047112 | orchestrator | 2025-10-09 10:24:54 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:24:54.047135 | orchestrator | 2025-10-09 10:24:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:24:57.105469 | orchestrator | 2025-10-09 10:24:57 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:24:57.105562 | orchestrator | 2025-10-09 10:24:57 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED 2025-10-09 10:24:57.105576 | orchestrator | 2025-10-09 10:24:57 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:24:57.106137 | orchestrator | 2025-10-09 10:24:57 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:24:57.107038 | orchestrator | 2025-10-09 10:24:57 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:24:57.107095 | orchestrator | 2025-10-09 10:24:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:00.160653 | orchestrator | 2025-10-09 10:25:00 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:25:00.161186 | orchestrator | 2025-10-09 10:25:00 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED 2025-10-09 10:25:00.162093 | orchestrator | 2025-10-09 10:25:00 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:25:00.163102 | orchestrator | 2025-10-09 10:25:00 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:25:00.163817 | orchestrator | 2025-10-09 10:25:00 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:25:00.163842 | orchestrator | 2025-10-09 10:25:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:03.205642 | orchestrator | 2025-10-09 10:25:03 | INFO  | Task 
ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:25:03.206478 | orchestrator | 2025-10-09 10:25:03 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED 2025-10-09 10:25:03.266432 | orchestrator | 2025-10-09 10:25:03 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:25:03.266468 | orchestrator | 2025-10-09 10:25:03 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:25:03.266480 | orchestrator | 2025-10-09 10:25:03 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:25:03.266492 | orchestrator | 2025-10-09 10:25:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:06.348577 | orchestrator | 2025-10-09 10:25:06 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:25:06.352015 | orchestrator | 2025-10-09 10:25:06 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED 2025-10-09 10:25:06.360124 | orchestrator | 2025-10-09 10:25:06 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:25:06.363795 | orchestrator | 2025-10-09 10:25:06 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:25:06.366963 | orchestrator | 2025-10-09 10:25:06 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state STARTED 2025-10-09 10:25:06.370119 | orchestrator | 2025-10-09 10:25:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:09.565502 | orchestrator | 2025-10-09 10:25:09 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:25:09.565593 | orchestrator | 2025-10-09 10:25:09 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED 2025-10-09 10:25:09.565609 | orchestrator | 2025-10-09 10:25:09 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED 2025-10-09 10:25:09.565621 | orchestrator | 2025-10-09 10:25:09 | INFO  | Task 
8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:25:09.568528 | orchestrator | 2025-10-09 10:25:09 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:25:09.569937 | orchestrator | 2025-10-09 10:25:09 | INFO  | Task 29273de1-d6d4-4218-9037-c4aaa1f8e4df is in state SUCCESS 2025-10-09 10:25:09.570983 | orchestrator | 2025-10-09 10:25:09 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:25:09.572466 | orchestrator | 2025-10-09 10:25:09.572494 | orchestrator | 2025-10-09 10:25:09.572506 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:25:09.572518 | orchestrator | 2025-10-09 10:25:09.572543 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:25:09.572575 | orchestrator | Thursday 09 October 2025 10:23:50 +0000 (0:00:00.397) 0:00:00.397 ****** 2025-10-09 10:25:09.572587 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:09.572599 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:09.572610 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:09.572620 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:25:09.572631 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:25:09.572642 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:25:09.572653 | orchestrator | 2025-10-09 10:25:09.572664 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:25:09.572675 | orchestrator | Thursday 09 October 2025 10:23:51 +0000 (0:00:01.107) 0:00:01.505 ****** 2025-10-09 10:25:09.572686 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:25:09.572697 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:25:09.572708 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 
10:25:09.572719 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:25:09.572730 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:25:09.572741 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-10-09 10:25:09.572752 | orchestrator | 2025-10-09 10:25:09.572763 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-10-09 10:25:09.572773 | orchestrator | 2025-10-09 10:25:09.572784 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-10-09 10:25:09.572795 | orchestrator | Thursday 09 October 2025 10:23:52 +0000 (0:00:01.148) 0:00:02.653 ****** 2025-10-09 10:25:09.572807 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:25:09.572818 | orchestrator | 2025-10-09 10:25:09.572829 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-10-09 10:25:09.572840 | orchestrator | Thursday 09 October 2025 10:23:54 +0000 (0:00:02.260) 0:00:04.914 ****** 2025-10-09 10:25:09.572851 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-10-09 10:25:09.572863 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-10-09 10:25:09.572874 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-10-09 10:25:09.572885 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-10-09 10:25:09.572896 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-10-09 10:25:09.572906 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-10-09 10:25:09.572917 | orchestrator | 2025-10-09 10:25:09.572928 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************ 2025-10-09 10:25:09.572939 | orchestrator | Thursday 09 October 2025 10:23:56 +0000 (0:00:01.790) 0:00:06.705 ****** 2025-10-09 10:25:09.572950 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-10-09 10:25:09.572961 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-10-09 10:25:09.572972 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-10-09 10:25:09.572983 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-10-09 10:25:09.572994 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-10-09 10:25:09.573004 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-10-09 10:25:09.573015 | orchestrator | 2025-10-09 10:25:09.573026 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-10-09 10:25:09.573037 | orchestrator | Thursday 09 October 2025 10:23:59 +0000 (0:00:02.572) 0:00:09.277 ****** 2025-10-09 10:25:09.573048 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-10-09 10:25:09.573059 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:09.573071 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-10-09 10:25:09.573090 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-10-09 10:25:09.573101 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:09.573112 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-10-09 10:25:09.573123 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:09.573133 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-10-09 10:25:09.573145 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:25:09.573155 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:25:09.573166 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-10-09 10:25:09.573177 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:25:09.573188 | 
orchestrator | 2025-10-09 10:25:09.573198 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-10-09 10:25:09.573209 | orchestrator | Thursday 09 October 2025 10:24:01 +0000 (0:00:02.606) 0:00:11.884 ****** 2025-10-09 10:25:09.573220 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:09.573231 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:09.573259 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:09.573271 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:25:09.573282 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:25:09.573293 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:25:09.573304 | orchestrator | 2025-10-09 10:25:09.573315 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-10-09 10:25:09.573326 | orchestrator | Thursday 09 October 2025 10:24:02 +0000 (0:00:01.006) 0:00:12.891 ****** 2025-10-09 10:25:09.573360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573419 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573499 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573541 | orchestrator | 2025-10-09 10:25:09.573552 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 
2025-10-09 10:25:09.573567 | orchestrator | Thursday 09 October 2025 10:24:05 +0000 (0:00:02.666) 0:00:15.558 ****** 2025-10-09 10:25:09.573580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-10-09 10:25:09.573686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.573705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.573717 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.573728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.573751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.573764 | orchestrator |
2025-10-09 10:25:09.573775 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-10-09 10:25:09.573786 | orchestrator | Thursday 09 October 2025 10:24:08 +0000 (0:00:03.412) 0:00:18.970 ******
2025-10-09 10:25:09.573798 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:25:09.573809 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:09.573820 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:09.573831 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:09.573842 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:09.573853 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:09.573864 | orchestrator |
2025-10-09 10:25:09.573875 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-10-09 10:25:09.573886 | orchestrator | Thursday 09 October 2025 10:24:10 +0000 (0:00:01.620) 0:00:20.591 ******
2025-10-09 10:25:09.573897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-10-09 10:25:09.573918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-10-09 10:25:09.573930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-10-09 10:25:09.573942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.573964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.573977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-10-09 10:25:09.574000 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-10-09 10:25:09.574011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-10-09 10:25:09.574073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.574085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.574109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.574121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-10-09 10:25:09.574139 | orchestrator |
2025-10-09 10:25:09.574150 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-10-09 10:25:09.574161 | orchestrator | Thursday 09 October 2025 10:24:13 +0000 (0:00:03.423) 0:00:24.015 ******
2025-10-09 10:25:09.574172 | orchestrator |
2025-10-09 10:25:09.574183 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-10-09 10:25:09.574195 | orchestrator | Thursday 09 October 2025 10:24:14 +0000 (0:00:00.757) 0:00:24.772 ******
2025-10-09 10:25:09.574205 | orchestrator |
2025-10-09 10:25:09.574216 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-10-09 10:25:09.574227 | orchestrator | Thursday 09 October 2025 10:24:14 +0000 (0:00:00.260) 0:00:25.033 ******
2025-10-09 10:25:09.574252 | orchestrator |
2025-10-09 10:25:09.574264 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-10-09 10:25:09.574275 | orchestrator | Thursday 09 October 2025 10:24:15 +0000 (0:00:00.277) 0:00:25.311 ******
2025-10-09 10:25:09.574286 | orchestrator |
2025-10-09 10:25:09.574297 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-10-09 10:25:09.574308 | orchestrator | Thursday 09 October 2025 10:24:15 +0000 (0:00:00.282) 0:00:25.593 ******
2025-10-09 10:25:09.574319 | orchestrator |
2025-10-09 10:25:09.574330 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-10-09 10:25:09.574341 | orchestrator | Thursday 09 October 2025 10:24:15 +0000 (0:00:00.313) 0:00:25.907 ******
2025-10-09 10:25:09.574352 | orchestrator |
2025-10-09 10:25:09.574363 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-10-09 10:25:09.574374 | orchestrator | Thursday 09 October 2025 10:24:15 +0000 (0:00:00.177) 0:00:26.085 ******
2025-10-09 10:25:09.574385 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:25:09.574396 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:25:09.574407 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:25:09.574418 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:25:09.574429 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:25:09.574440 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:25:09.574450 | orchestrator |
2025-10-09 10:25:09.574461 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-10-09 10:25:09.574472 | orchestrator | Thursday 09 October 2025 10:24:26 +0000 (0:00:10.576) 0:00:36.661 ******
2025-10-09 10:25:09.574483 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:25:09.574494 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:25:09.574505 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:25:09.574516 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:25:09.574527 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:25:09.574538 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:25:09.574549 | orchestrator |
2025-10-09 10:25:09.574560 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-10-09 10:25:09.574571 | orchestrator | Thursday 09 October 2025 10:24:28 +0000 (0:00:02.255) 0:00:38.917 ******
2025-10-09 10:25:09.574581 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:25:09.574592 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:25:09.574603 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:25:09.574614 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:25:09.574625 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:25:09.574636 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:25:09.574647 | orchestrator |
2025-10-09 10:25:09.574658 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-10-09 10:25:09.574669 | orchestrator | Thursday 09 October 2025 10:24:38 +0000 (0:00:09.502) 0:00:48.420 ******
2025-10-09 10:25:09.574686 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-10-09 10:25:09.574697 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-10-09 10:25:09.574708 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-10-09 10:25:09.574719 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-10-09 10:25:09.574730 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-10-09 10:25:09.574747 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-10-09 10:25:09.574763 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-10-09 10:25:09.574774 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-10-09 10:25:09.574785 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-10-09 10:25:09.574796 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-10-09 10:25:09.574807 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-10-09 10:25:09.574817 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-10-09 10:25:09.574828 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-10-09 10:25:09.574839 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-10-09 10:25:09.574850 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-10-09 10:25:09.574861 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-10-09 10:25:09.574872 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-10-09 10:25:09.574883 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-10-09 10:25:09.574893 | orchestrator |
2025-10-09 10:25:09.574905 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-10-09 10:25:09.574916 | orchestrator | Thursday 09 October 2025 10:24:48 +0000 (0:00:10.016) 0:00:58.436 ******
2025-10-09 10:25:09.574926 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-10-09 10:25:09.574938 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:09.574949 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-10-09 10:25:09.574959 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:09.574970 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-10-09 10:25:09.574981 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:09.574992 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-10-09 10:25:09.575003 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-10-09 10:25:09.575014 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-10-09 10:25:09.575025 | orchestrator |
2025-10-09 10:25:09.575036 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-10-09 10:25:09.575047 | orchestrator | Thursday 09 October 2025 10:24:51 +0000 (0:00:02.806) 0:01:01.242 ******
2025-10-09 10:25:09.575058 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-10-09 10:25:09.575069 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:09.575080 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-10-09 10:25:09.575096 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:09.575107 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-10-09 10:25:09.575118 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:09.575129 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-10-09 10:25:09.575140 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-10-09 10:25:09.575151 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-10-09 10:25:09.575162 | orchestrator |
2025-10-09 10:25:09.575173 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-10-09 10:25:09.575184 | orchestrator | Thursday 09 October 2025 10:24:54 +0000 (0:00:03.749) 0:01:04.992 ******
2025-10-09 10:25:09.575195 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:25:09.575206 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:25:09.575217 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:25:09.575228 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:25:09.575277 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:25:09.575290 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:25:09.575301 | orchestrator |
2025-10-09 10:25:09.575312 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:25:09.575323 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-10-09 10:25:09.575335 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-10-09 10:25:09.575346 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-10-09 10:25:09.575357 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-09 10:25:09.575368 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-09 10:25:09.575391 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-09 10:25:09.575403 | orchestrator |
2025-10-09 10:25:09.575414 | orchestrator |
2025-10-09 10:25:09.575425 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:25:09.575436 | orchestrator | Thursday 09 October 2025 10:25:05 +0000 (0:00:10.666) 0:01:15.658 ******
2025-10-09 10:25:09.575447 | orchestrator | ===============================================================================
2025-10-09 10:25:09.575458 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.17s
2025-10-09 10:25:09.575469 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.58s
2025-10-09 10:25:09.575480 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 10.02s
2025-10-09 10:25:09.575491 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.75s
2025-10-09 10:25:09.575501 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.42s
2025-10-09 10:25:09.575512 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.41s
2025-10-09 10:25:09.575523 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.81s
2025-10-09 10:25:09.575534 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.67s
2025-10-09 10:25:09.575545 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.61s
2025-10-09 10:25:09.575556 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.57s
2025-10-09 10:25:09.575567 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.26s
2025-10-09 10:25:09.575589 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.25s
2025-10-09 10:25:09.575600 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.07s
2025-10-09 10:25:09.575610 | orchestrator | module-load : Load modules ---------------------------------------------- 1.79s
2025-10-09 10:25:09.575621 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.62s
2025-10-09 10:25:09.575632 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.15s
2025-10-09 10:25:09.575643 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.11s
2025-10-09 10:25:09.575654 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.01s
2025-10-09 10:25:12.743635 | orchestrator | 2025-10-09 10:25:12 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED
2025-10-09 10:25:12.743727 | orchestrator | 2025-10-09 10:25:12 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:25:12.743741 | orchestrator | 2025-10-09 10:25:12 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:25:12.744676 | orchestrator | 2025-10-09 10:25:12 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:25:12.744699 | orchestrator | 2025-10-09 10:25:12 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:25:12.744712 | orchestrator | 2025-10-09 10:25:12 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:25:15.773330 | orchestrator | 2025-10-09 10:25:15 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED
2025-10-09 10:25:15.773433 | orchestrator | 2025-10-09 10:25:15 | INFO  | Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state STARTED
2025-10-09 10:25:15.773924 | orchestrator | 2025-10-09 10:25:15 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:25:15.774451 | orchestrator | 2025-10-09 10:25:15 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:25:15.776367 | orchestrator | 2025-10-09 10:25:15 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:25:15.776390 | orchestrator | 2025-10-09 10:25:15 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:25:18.815672 | orchestrator | 2025-10-09 10:25:18 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED
2025-10-09 10:25:18.819937 | orchestrator | 2025-10-09 10:25:18 | INFO  |
Task ede6ebc2-e49e-45c2-886f-04e0c6bb3147 is in state SUCCESS
2025-10-09 10:25:18.821656 | orchestrator |
2025-10-09 10:25:18.821691 | orchestrator |
2025-10-09 10:25:18.821704 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-10-09 10:25:18.821717 | orchestrator |
2025-10-09 10:25:18.821728 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-10-09 10:25:18.821740 | orchestrator | Thursday 09 October 2025 10:21:09 +0000 (0:00:00.232) 0:00:00.232 ******
2025-10-09 10:25:18.821752 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:25:18.821764 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:25:18.821776 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:25:18.821787 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:25:18.821798 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:25:18.821810 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:25:18.821821 | orchestrator |
2025-10-09 10:25:18.821832 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-10-09 10:25:18.821844 | orchestrator | Thursday 09 October 2025 10:21:10 +0000 (0:00:00.913) 0:00:01.146 ******
2025-10-09 10:25:18.821855 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:18.821867 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:18.821879 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:18.821890 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:25:18.821973 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:18.821988 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:18.821999 | orchestrator |
2025-10-09 10:25:18.822011 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-10-09 10:25:18.822066 | orchestrator | Thursday 09 October 2025 10:21:11 +0000 (0:00:00.781) 0:00:01.928 ******
2025-10-09 10:25:18.822078 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:18.822089 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:18.822100 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:18.822110 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:25:18.822121 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:18.822132 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:18.822143 | orchestrator |
2025-10-09 10:25:18.822154 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-10-09 10:25:18.822165 | orchestrator | Thursday 09 October 2025 10:21:12 +0000 (0:00:00.893) 0:00:02.822 ******
2025-10-09 10:25:18.822175 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:25:18.822186 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:25:18.822198 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:25:18.822209 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:25:18.822220 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:25:18.822231 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:25:18.822267 | orchestrator |
2025-10-09 10:25:18.822280 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-10-09 10:25:18.822293 | orchestrator | Thursday 09 October 2025 10:21:14 +0000 (0:00:02.486) 0:00:05.308 ******
2025-10-09 10:25:18.822305 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:25:18.822317 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:25:18.822329 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:25:18.822341 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:25:18.822353 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:25:18.822366 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:25:18.822378 | orchestrator |
2025-10-09 10:25:18.822390 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-10-09 10:25:18.822402 | orchestrator | Thursday 09 October 2025 10:21:17 +0000 (0:00:02.366) 0:00:07.675 ******
2025-10-09 10:25:18.822415 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:25:18.822427 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:25:18.822439 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:25:18.822451 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:25:18.822463 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:25:18.822476 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:25:18.822488 | orchestrator |
2025-10-09 10:25:18.822501 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-10-09 10:25:18.822513 | orchestrator | Thursday 09 October 2025 10:21:18 +0000 (0:00:00.977) 0:00:08.652 ******
2025-10-09 10:25:18.822525 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:18.822537 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:18.822549 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:18.822562 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:25:18.822574 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:18.822586 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:18.822598 | orchestrator |
2025-10-09 10:25:18.822611 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-10-09 10:25:18.822622 | orchestrator | Thursday 09 October 2025 10:21:19 +0000 (0:00:01.093) 0:00:09.745 ******
2025-10-09 10:25:18.822633 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:18.822645 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:18.822656 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:18.822667 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:25:18.822678 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:18.822689 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:18.822701 | orchestrator |
2025-10-09 10:25:18.822712 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-10-09 10:25:18.822732 | orchestrator | Thursday 09 October 2025 10:21:20 +0000 (0:00:00.890) 0:00:10.636 ******
2025-10-09 10:25:18.822744 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:25:18.822755 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:25:18.822767 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:18.822778 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:25:18.822789 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:25:18.822801 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:18.822812 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:25:18.822823 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:25:18.822834 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:18.822846 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:25:18.822872 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:25:18.822884 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:25:18.822895 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:25:18.822906 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:25:18.822917 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:18.822928 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:25:18.822939 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:25:18.822950 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:18.822961 | orchestrator |
2025-10-09 10:25:18.822972 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-10-09 10:25:18.822983 | orchestrator | Thursday 09 October 2025 10:21:21 +0000 (0:00:01.026) 0:00:11.662 ******
2025-10-09 10:25:18.822994 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:18.823006 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:18.823017 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:18.823034 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:25:18.823045 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:18.823056 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:18.823067 | orchestrator |
2025-10-09 10:25:18.823079 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-10-09 10:25:18.823091 | orchestrator | Thursday 09 October 2025 10:21:22 +0000 (0:00:01.902) 0:00:13.565 ******
2025-10-09 10:25:18.823103 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:25:18.823114 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:25:18.823125 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:25:18.823136 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:25:18.823147 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:25:18.823158 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:25:18.823169 | orchestrator |
2025-10-09 10:25:18.823180 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-10-09 10:25:18.823191 | orchestrator | Thursday 09 October 2025 10:21:24 +0000 (0:00:01.478) 0:00:15.044 ******
2025-10-09 10:25:18.823202 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:25:18.823213 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:25:18.823224 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:25:18.823235 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:25:18.823276 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:25:18.823288 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:25:18.823298 | orchestrator |
2025-10-09 10:25:18.823310 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-10-09 10:25:18.823321 | orchestrator | Thursday 09 October 2025 10:21:30 +0000 (0:00:05.638) 0:00:20.683 ******
2025-10-09 10:25:18.823339 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:18.823350 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:18.823361 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:18.823372 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:25:18.823384 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:18.823395 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:18.823405 | orchestrator |
2025-10-09 10:25:18.823417 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-10-09 10:25:18.823428 | orchestrator | Thursday 09 October 2025 10:21:32 +0000 (0:00:02.300) 0:00:22.984 ******
2025-10-09 10:25:18.823439 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:25:18.823450 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:25:18.823461 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:25:18.823472 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:25:18.823483 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:25:18.823494 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:25:18.823505 | orchestrator |
2025-10-09 10:25:18.823516 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-10-09 10:25:18.823529 | orchestrator | Thursday 09 October 2025 10:21:36 +0000 (0:00:04.121) 0:00:27.105 ******
2025-10-09 10:25:18.823540 |
orchestrator | ok: [testbed-node-3] 2025-10-09 10:25:18.823551 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:25:18.823562 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:25:18.823573 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.823584 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.823595 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.823606 | orchestrator | 2025-10-09 10:25:18.823618 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-10-09 10:25:18.823629 | orchestrator | Thursday 09 October 2025 10:21:38 +0000 (0:00:01.569) 0:00:28.674 ****** 2025-10-09 10:25:18.823640 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-10-09 10:25:18.823652 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-10-09 10:25:18.823663 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-10-09 10:25:18.823674 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-10-09 10:25:18.823685 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-10-09 10:25:18.823696 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-10-09 10:25:18.823707 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-10-09 10:25:18.823718 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-10-09 10:25:18.823729 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-10-09 10:25:18.823740 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-10-09 10:25:18.823751 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-10-09 10:25:18.823762 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-10-09 10:25:18.823773 | orchestrator | 2025-10-09 10:25:18.823784 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-10-09 10:25:18.823796 | orchestrator | Thursday 09 October 2025 10:21:41 +0000 
(0:00:03.160) 0:00:31.835 ****** 2025-10-09 10:25:18.823807 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.823818 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:25:18.823829 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:25:18.823840 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:25:18.823851 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.823862 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.823873 | orchestrator | 2025-10-09 10:25:18.823891 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-10-09 10:25:18.823903 | orchestrator | 2025-10-09 10:25:18.823915 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-10-09 10:25:18.823926 | orchestrator | Thursday 09 October 2025 10:21:43 +0000 (0:00:02.593) 0:00:34.428 ****** 2025-10-09 10:25:18.823943 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.823954 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.823965 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.823976 | orchestrator | 2025-10-09 10:25:18.823987 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-10-09 10:25:18.823999 | orchestrator | Thursday 09 October 2025 10:21:45 +0000 (0:00:01.532) 0:00:35.961 ****** 2025-10-09 10:25:18.824010 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.824021 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.824032 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.824043 | orchestrator | 2025-10-09 10:25:18.824054 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-10-09 10:25:18.824065 | orchestrator | Thursday 09 October 2025 10:21:46 +0000 (0:00:01.102) 0:00:37.063 ****** 2025-10-09 10:25:18.824076 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.824092 | orchestrator | ok: 
[testbed-node-2] 2025-10-09 10:25:18.824103 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.824114 | orchestrator | 2025-10-09 10:25:18.824125 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-10-09 10:25:18.824137 | orchestrator | Thursday 09 October 2025 10:21:47 +0000 (0:00:01.129) 0:00:38.193 ****** 2025-10-09 10:25:18.824148 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.824159 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.824170 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.824181 | orchestrator | 2025-10-09 10:25:18.824192 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-10-09 10:25:18.824203 | orchestrator | Thursday 09 October 2025 10:21:48 +0000 (0:00:01.039) 0:00:39.233 ****** 2025-10-09 10:25:18.824214 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:18.824226 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:18.824237 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:18.824299 | orchestrator | 2025-10-09 10:25:18.824311 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-10-09 10:25:18.824322 | orchestrator | Thursday 09 October 2025 10:21:49 +0000 (0:00:00.601) 0:00:39.834 ****** 2025-10-09 10:25:18.824334 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.824460 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.824470 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.824481 | orchestrator | 2025-10-09 10:25:18.824492 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-10-09 10:25:18.824503 | orchestrator | Thursday 09 October 2025 10:21:50 +0000 (0:00:01.018) 0:00:40.853 ****** 2025-10-09 10:25:18.824514 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.824525 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.824535 | 
orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.824546 | orchestrator | 2025-10-09 10:25:18.824557 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-10-09 10:25:18.824568 | orchestrator | Thursday 09 October 2025 10:21:51 +0000 (0:00:01.625) 0:00:42.479 ****** 2025-10-09 10:25:18.824579 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:25:18.824590 | orchestrator | 2025-10-09 10:25:18.824600 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-10-09 10:25:18.824611 | orchestrator | Thursday 09 October 2025 10:21:52 +0000 (0:00:00.794) 0:00:43.273 ****** 2025-10-09 10:25:18.824622 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.824632 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.824643 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.824654 | orchestrator | 2025-10-09 10:25:18.824664 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-10-09 10:25:18.824675 | orchestrator | Thursday 09 October 2025 10:21:54 +0000 (0:00:02.276) 0:00:45.550 ****** 2025-10-09 10:25:18.824686 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:18.824697 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:18.824707 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.824725 | orchestrator | 2025-10-09 10:25:18.824735 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-10-09 10:25:18.824745 | orchestrator | Thursday 09 October 2025 10:21:55 +0000 (0:00:00.574) 0:00:46.124 ****** 2025-10-09 10:25:18.824754 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:18.824764 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.824773 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:18.824783 | orchestrator | 
2025-10-09 10:25:18.824792 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-10-09 10:25:18.824802 | orchestrator | Thursday 09 October 2025 10:21:56 +0000 (0:00:00.893) 0:00:47.018 ****** 2025-10-09 10:25:18.824811 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:18.824821 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:18.824830 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.824840 | orchestrator | 2025-10-09 10:25:18.824849 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-10-09 10:25:18.824859 | orchestrator | Thursday 09 October 2025 10:21:58 +0000 (0:00:02.095) 0:00:49.113 ****** 2025-10-09 10:25:18.824868 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:18.824878 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:18.824888 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:18.824897 | orchestrator | 2025-10-09 10:25:18.824907 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-10-09 10:25:18.824916 | orchestrator | Thursday 09 October 2025 10:21:58 +0000 (0:00:00.469) 0:00:49.582 ****** 2025-10-09 10:25:18.824926 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:18.824936 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:18.824945 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:18.824955 | orchestrator | 2025-10-09 10:25:18.824964 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-10-09 10:25:18.824973 | orchestrator | Thursday 09 October 2025 10:21:59 +0000 (0:00:00.488) 0:00:50.071 ****** 2025-10-09 10:25:18.824983 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.824993 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.825002 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.825012 | orchestrator | 
2025-10-09 10:25:18.825029 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-10-09 10:25:18.825039 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:03.394) 0:00:53.466 ****** 2025-10-09 10:25:18.825049 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-10-09 10:25:18.825059 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-10-09 10:25:18.825069 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-10-09 10:25:18.825081 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-10-09 10:25:18.825092 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-10-09 10:25:18.825102 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-10-09 10:25:18.825113 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-10-09 10:25:18.825124 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-10-09 10:25:18.825134 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2025-10-09 10:25:18.825152 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-10-09 10:25:18.825163 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-10-09 10:25:18.825174 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-10-09 10:25:18.825185 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-10-09 10:25:18.825891 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-10-09 10:25:18.825921 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.825931 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.825942 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.825952 | orchestrator | 2025-10-09 10:25:18.825963 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-10-09 10:25:18.825973 | orchestrator | Thursday 09 October 2025 10:22:58 +0000 (0:00:55.358) 0:01:48.824 ****** 2025-10-09 10:25:18.825984 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:18.825995 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:18.826010 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:18.826074 | orchestrator | 2025-10-09 10:25:18.826086 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-10-09 10:25:18.826096 | orchestrator | Thursday 09 October 2025 10:22:58 +0000 (0:00:00.340) 0:01:49.165 ****** 2025-10-09 10:25:18.826106 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.826117 | orchestrator | changed: 
[testbed-node-1] 2025-10-09 10:25:18.826127 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.826139 | orchestrator | 2025-10-09 10:25:18.826150 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-10-09 10:25:18.826161 | orchestrator | Thursday 09 October 2025 10:22:59 +0000 (0:00:01.171) 0:01:50.337 ****** 2025-10-09 10:25:18.826173 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.826184 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.826195 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.826207 | orchestrator | 2025-10-09 10:25:18.826320 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-10-09 10:25:18.826335 | orchestrator | Thursday 09 October 2025 10:23:01 +0000 (0:00:01.856) 0:01:52.193 ****** 2025-10-09 10:25:18.826346 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.826357 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.826369 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.826380 | orchestrator | 2025-10-09 10:25:18.826391 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-10-09 10:25:18.826403 | orchestrator | Thursday 09 October 2025 10:23:27 +0000 (0:00:26.197) 0:02:18.391 ****** 2025-10-09 10:25:18.826414 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.826424 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.826434 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.826444 | orchestrator | 2025-10-09 10:25:18.826455 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-10-09 10:25:18.826465 | orchestrator | Thursday 09 October 2025 10:23:28 +0000 (0:00:00.617) 0:02:19.009 ****** 2025-10-09 10:25:18.826475 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.826485 | orchestrator | ok: [testbed-node-1] 2025-10-09 
10:25:18.826495 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.826505 | orchestrator | 2025-10-09 10:25:18.826515 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-10-09 10:25:18.826526 | orchestrator | Thursday 09 October 2025 10:23:29 +0000 (0:00:00.662) 0:02:19.671 ****** 2025-10-09 10:25:18.826548 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.826558 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.826579 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.826590 | orchestrator | 2025-10-09 10:25:18.826600 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-10-09 10:25:18.826610 | orchestrator | Thursday 09 October 2025 10:23:29 +0000 (0:00:00.638) 0:02:20.310 ****** 2025-10-09 10:25:18.826620 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.826630 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.826640 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.826650 | orchestrator | 2025-10-09 10:25:18.826660 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-10-09 10:25:18.826671 | orchestrator | Thursday 09 October 2025 10:23:30 +0000 (0:00:01.083) 0:02:21.393 ****** 2025-10-09 10:25:18.826681 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.826691 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.826701 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.826711 | orchestrator | 2025-10-09 10:25:18.826721 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-10-09 10:25:18.826730 | orchestrator | Thursday 09 October 2025 10:23:31 +0000 (0:00:00.341) 0:02:21.735 ****** 2025-10-09 10:25:18.826740 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.826750 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.826760 | orchestrator | changed: 
[testbed-node-2] 2025-10-09 10:25:18.826771 | orchestrator | 2025-10-09 10:25:18.826781 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-10-09 10:25:18.826791 | orchestrator | Thursday 09 October 2025 10:23:31 +0000 (0:00:00.731) 0:02:22.466 ****** 2025-10-09 10:25:18.826801 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.826811 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.826821 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.826830 | orchestrator | 2025-10-09 10:25:18.826841 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-10-09 10:25:18.826851 | orchestrator | Thursday 09 October 2025 10:23:32 +0000 (0:00:00.638) 0:02:23.105 ****** 2025-10-09 10:25:18.826861 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.826871 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.826881 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.826891 | orchestrator | 2025-10-09 10:25:18.826901 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-10-09 10:25:18.826911 | orchestrator | Thursday 09 October 2025 10:23:33 +0000 (0:00:01.169) 0:02:24.274 ****** 2025-10-09 10:25:18.826920 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:25:18.826931 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:25:18.826940 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:25:18.826950 | orchestrator | 2025-10-09 10:25:18.826960 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-10-09 10:25:18.826970 | orchestrator | Thursday 09 October 2025 10:23:34 +0000 (0:00:01.000) 0:02:25.274 ****** 2025-10-09 10:25:18.826980 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:18.826991 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:18.827001 | orchestrator | skipping: 
[testbed-node-2] 2025-10-09 10:25:18.827011 | orchestrator | 2025-10-09 10:25:18.827021 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-10-09 10:25:18.827031 | orchestrator | Thursday 09 October 2025 10:23:35 +0000 (0:00:00.345) 0:02:25.620 ****** 2025-10-09 10:25:18.827041 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:25:18.827051 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:25:18.827061 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:25:18.827167 | orchestrator | 2025-10-09 10:25:18.827180 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-10-09 10:25:18.827192 | orchestrator | Thursday 09 October 2025 10:23:35 +0000 (0:00:00.331) 0:02:25.951 ****** 2025-10-09 10:25:18.827208 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.827224 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.827302 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.827321 | orchestrator | 2025-10-09 10:25:18.827345 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-10-09 10:25:18.827372 | orchestrator | Thursday 09 October 2025 10:23:36 +0000 (0:00:00.956) 0:02:26.908 ****** 2025-10-09 10:25:18.827389 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:25:18.827404 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:25:18.827422 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:25:18.827437 | orchestrator | 2025-10-09 10:25:18.827454 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-10-09 10:25:18.827467 | orchestrator | Thursday 09 October 2025 10:23:36 +0000 (0:00:00.627) 0:02:27.535 ****** 2025-10-09 10:25:18.827476 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-10-09 10:25:18.827486 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-10-09 10:25:18.827496 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-10-09 10:25:18.827505 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-10-09 10:25:18.827515 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-10-09 10:25:18.827525 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-10-09 10:25:18.827534 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-10-09 10:25:18.827544 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-10-09 10:25:18.827554 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-10-09 10:25:18.827564 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-10-09 10:25:18.827573 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-10-09 10:25:18.827593 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-10-09 10:25:18.827601 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-10-09 10:25:18.827609 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-10-09 10:25:18.827617 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-10-09 10:25:18.827625 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-10-09 10:25:18.827633 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-10-09 10:25:18.827641 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-10-09 10:25:18.827648 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-10-09 10:25:18.827656 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-10-09 10:25:18.827664 | orchestrator | 2025-10-09 10:25:18.827672 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-10-09 10:25:18.827680 | orchestrator | 2025-10-09 10:25:18.827688 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-10-09 10:25:18.827695 | orchestrator | Thursday 09 October 2025 10:23:40 +0000 (0:00:03.096) 0:02:30.632 ****** 2025-10-09 10:25:18.827703 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:25:18.827711 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:25:18.827719 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:25:18.827727 | orchestrator | 2025-10-09 10:25:18.827735 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-10-09 10:25:18.827743 | orchestrator | Thursday 09 October 2025 10:23:40 +0000 (0:00:00.836) 0:02:31.469 ****** 2025-10-09 10:25:18.827757 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:25:18.827765 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:25:18.827772 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:25:18.827780 | orchestrator | 2025-10-09 10:25:18.827788 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-10-09 10:25:18.827796 | orchestrator | Thursday 09 October 2025 10:23:41 +0000 (0:00:00.645) 0:02:32.114 ****** 2025-10-09 10:25:18.827805 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:25:18.827814 | 
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Thursday 09 October 2025 10:23:41 +0000 (0:00:00.339) 0:02:32.453 ******
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Thursday 09 October 2025 10:23:42 +0000 (0:00:00.700) 0:02:33.154 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Thursday 09 October 2025 10:23:42 +0000 (0:00:00.355) 0:02:33.509 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Thursday 09 October 2025 10:23:43 +0000 (0:00:00.396) 0:02:33.906 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Thursday 09 October 2025 10:23:43 +0000 (0:00:00.343) 0:02:34.249 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Thursday 09 October 2025 10:23:44 +0000 (0:00:01.063) 0:02:35.313 ******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Thursday 09 October 2025 10:23:45 +0000 (0:00:01.185) 0:02:36.499 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Thursday 09 October 2025 10:23:47 +0000 (0:00:01.509) 0:02:38.009 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Thursday 09 October 2025 10:23:59 +0000 (0:00:12.403) 0:02:50.413 ******
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Thursday 09 October 2025 10:24:00 +0000 (0:00:00.959) 0:02:51.373 ******
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Thursday 09 October 2025 10:24:01 +0000 (0:00:00.560) 0:02:51.942 ******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Thursday 09 October 2025 10:24:01 +0000 (0:00:00.560) 0:02:52.503 ******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Thursday 09 October 2025 10:24:02 +0000 (0:00:00.907) 0:02:53.411 ******
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Thursday 09 October 2025 10:24:03 +0000 (0:00:00.564) 0:02:53.976 ******
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Thursday 09 October 2025 10:24:05 +0000 (0:00:01.787) 0:02:55.763 ******
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Thursday 09 October 2025 10:24:05 +0000 (0:00:00.718) 0:02:56.481 ******
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Thursday 09 October 2025 10:24:06 +0000 (0:00:00.579) 0:02:57.061 ******
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Thursday 09 October 2025 10:24:06 +0000 (0:00:00.419) 0:02:57.480 ******
ok: [testbed-manager]
TASK [kubectl : Include distribution specific install tasks] *******************
Thursday 09 October 2025 10:24:07 +0000 (0:00:00.143) 0:02:57.624 ******
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Thursday 09 October 2025 10:24:07 +0000 (0:00:00.200) 0:02:57.825 ******
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Thursday 09 October 2025 10:24:08 +0000 (0:00:00.877) 0:02:58.702 ******
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Thursday 09 October 2025 10:24:10 +0000 (0:00:02.202) 0:03:00.904 ******
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Thursday 09 October 2025 10:24:11 +0000 (0:00:00.915) 0:03:01.819 ******
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Thursday 09 October 2025 10:24:11 +0000 (0:00:00.494) 0:03:02.314 ******
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Thursday 09 October 2025 10:24:23 +0000 (0:00:11.833) 0:03:14.148 ******
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Thursday 09 October 2025 10:24:40 +0000 (0:00:16.806) 0:03:30.954 ******
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Thursday 09 October 2025 10:24:41 +0000 (0:00:00.834) 0:03:31.788 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Thursday 09 October 2025 10:24:41 +0000 (0:00:00.427) 0:03:32.216 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Thursday 09 October 2025 10:24:42 +0000 (0:00:00.412) 0:03:32.628 ******
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Thursday 09 October 2025 10:24:42 +0000 (0:00:00.898) 0:03:33.527 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
Thursday 09 October 2025 10:24:43 +0000 (0:00:00.642) 0:03:34.170 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
Thursday 09 October 2025 10:24:43 +0000 (0:00:00.246) 0:03:34.416 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
Thursday 09 October 2025 10:24:44 +0000 (0:00:00.221) 0:03:34.638 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
Thursday 09 October 2025 10:24:44 +0000 (0:00:00.226) 0:03:34.865 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Log installed Cilium CLI version] **********************
Thursday 09 October 2025 10:24:44 +0000 (0:00:00.242) 0:03:35.107 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
Thursday 09 October 2025 10:24:44 +0000 (0:00:00.317) 0:03:35.425 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
Thursday 09 October 2025 10:24:45 +0000 (0:00:00.352) 0:03:35.777 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Set architecture variable] *****************************
Thursday 09 October 2025 10:24:45 +0000 (0:00:00.396) 0:03:36.173 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
Thursday 09 October 2025 10:24:46 +0000 (0:00:00.788) 0:03:36.961 ******
skipping: [testbed-node-0] => (item=.tar.gz)
skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
skipping: [testbed-node-0]

TASK [k3s_server_post : Verify the downloaded tarball] *************************
Thursday 09 October 2025 10:24:46 +0000 (0:00:00.351) 0:03:37.313 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
Thursday 09 October 2025 10:24:46 +0000 (0:00:00.241) 0:03:37.554 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
Thursday 09 October 2025 10:24:47 +0000 (0:00:00.239) 0:03:37.793 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Thursday 09 October 2025 10:24:47 +0000 (0:00:00.265) 0:03:38.058 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Thursday 09 October 2025 10:24:47 +0000 (0:00:00.214) 0:03:38.273 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Thursday 09 October 2025 10:24:47 +0000 (0:00:00.304) 0:03:38.577 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Check Cilium version] **********************************
Thursday 09 October 2025 10:24:48 +0000 (0:00:00.247) 0:03:38.825 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Thursday 09 October 2025 10:24:48 +0000 (0:00:00.229) 0:03:39.054 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Thursday 09 October 2025 10:24:48 +0000 (0:00:00.256) 0:03:39.311 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Thursday 09 October 2025 10:24:48 +0000 (0:00:00.226) 0:03:39.537 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Thursday 09 October 2025 10:24:49 +0000 (0:00:00.269) 0:03:39.807 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Thursday 09 October 2025 10:24:50 +0000 (0:00:00.921) 0:03:40.729 ******
skipping: [testbed-node-0] => (item=deployment/cilium-operator)
skipping: [testbed-node-0] => (item=daemonset/cilium)
skipping: [testbed-node-0] => (item=deployment/hubble-relay)
skipping: [testbed-node-0] => (item=deployment/hubble-ui)
skipping: [testbed-node-0]

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Thursday 09 October 2025 10:24:50 +0000 (0:00:00.598) 0:03:41.327 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Thursday 09 October 2025 10:24:50 +0000 (0:00:00.206) 0:03:41.534 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Thursday 09 October 2025 10:24:51 +0000 (0:00:00.267) 0:03:41.802 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Thursday 09 October 2025 10:24:51 +0000 (0:00:00.197) 0:03:41.999 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Thursday 09 October 2025 10:24:51 +0000 (0:00:00.208) 0:03:42.208 ******
skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
skipping: [testbed-node-0]

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Thursday 09 October 2025 10:24:51 +0000 (0:00:00.335) 0:03:42.543 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Thursday 09 October 2025 10:24:52 +0000 (0:00:00.353) 0:03:42.897 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Thursday 09 October 2025 10:24:53 +0000 (0:00:01.294) 0:03:44.192 ******
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Thursday 09 October 2025 10:24:53 +0000 (0:00:00.164) 0:03:44.356 ******
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Thursday 09 October 2025 10:24:53 +0000 (0:00:00.242) 0:03:44.599 ******
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Thursday 09 October 2025 10:25:00 +0000 (0:00:06.428) 0:03:51.027 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Thursday 09 October 2025 10:25:01 +0000 (0:00:00.993) 0:03:52.021 ******
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Thursday 09 October 2025 10:25:14 +0000 (0:00:13.576) 0:04:05.597 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Thursday 09 October 2025 10:25:15 +0000 (0:00:00.845) 0:04:06.443 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0  rescued=0 ignored=0
testbed-node-0  : ok=42  changed=20  unreachable=0 failed=0 skipped=45 rescued=0 ignored=0
testbed-node-1  : ok=39  changed=17  unreachable=0 failed=0 skipped=21 rescued=0 ignored=0
testbed-node-2  : ok=39  changed=17  unreachable=0 failed=0 skipped=21 rescued=0 ignored=0
testbed-node-3  : ok=19  changed=9   unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
testbed-node-4  : ok=19  changed=9   unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
testbed-node-5  : ok=19  changed=9   unreachable=0 failed=0 skipped=13 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Thursday 09 October 2025 10:25:16 +0000 (0:00:00.490) 0:04:06.933 ******
===============================================================================
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.36s
k3s_server : Enable and check K3s service ------------------------------ 26.20s
kubectl : Install required packages ------------------------------------ 16.81s
Manage labels ---------------------------------------------------------- 13.58s
k3s_agent : Manage k3s service ----------------------------------------- 12.40s
kubectl : Add repository Debian ---------------------------------------- 11.83s
k9s : Install k9s packages ---------------------------------------------- 6.43s
k3s_download : Download k3s binary x64 ---------------------------------- 5.64s
k3s_download : Download k3s binary armhf -------------------------------- 4.12s
k3s_server : Init cluster inside the transient k3s-init service --------- 3.39s
k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.16s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.10s
k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.59s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.49s
k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.37s
k3s_download : Download k3s binary arm64 -------------------------------- 2.30s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.28s
kubectl : Install apt-transport-https package --------------------------- 2.20s
k3s_server : Copy vip manifest to first master -------------------------- 2.10s
k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.90s
2025-10-09 10:25:18 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:25:18 | INFO  |
Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:25:18.831107 | orchestrator | 2025-10-09 10:25:18 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:25:18.831114 | orchestrator | 2025-10-09 10:25:18 | INFO  | Task 56f1c5c0-209e-4e3c-bd59-e6c55ec39b44 is in state STARTED
2025-10-09 10:25:18.831124 | orchestrator | 2025-10-09 10:25:18 | INFO  | Task 411c41e4-8a77-424f-9de6-6d88c51c21a8 is in state STARTED
2025-10-09 10:25:18.831131 | orchestrator | 2025-10-09 10:25:18 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:25:25.104857 | orchestrator | 2025-10-09 10:25:25 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED
2025-10-09 10:25:25.106233 | orchestrator | 2025-10-09 10:25:25 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:25:25.106734 | orchestrator | 2025-10-09 10:25:25 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:25:25.107677 | orchestrator | 2025-10-09 10:25:25 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:25:25.108380 | orchestrator | 2025-10-09 10:25:25 | INFO  | Task 56f1c5c0-209e-4e3c-bd59-e6c55ec39b44 is in state SUCCESS
2025-10-09 10:25:25.109170 | orchestrator | 2025-10-09 10:25:25 | INFO  | Task 411c41e4-8a77-424f-9de6-6d88c51c21a8 is in state STARTED
2025-10-09 10:25:25.109194 | orchestrator | 2025-10-09 10:25:25 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:25:31.184544 | orchestrator | 2025-10-09 10:25:31 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED
2025-10-09 10:25:31.187984 | orchestrator | 2025-10-09 10:25:31 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state STARTED
2025-10-09 10:25:31.190201 | orchestrator | 2025-10-09 10:25:31 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:25:31.190898 | orchestrator | 2025-10-09 10:25:31 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:25:31.193680 | orchestrator | 2025-10-09 10:25:31 | INFO  | Task 411c41e4-8a77-424f-9de6-6d88c51c21a8 is in state SUCCESS
2025-10-09 10:25:31.193710 | orchestrator | 2025-10-09 10:25:31 | INFO  | Wait 1 second(s) until the next check
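The watcher output above polls a set of task IDs every few seconds and reprints each task's state until it leaves STARTED for a terminal state. A minimal sketch of such a wait loop, assuming a `get_state` callable that returns the current state string for a task ID; the function and parameter names here are illustrative, not the OSISM manager's actual implementation:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until every task reaches a terminal state.

    get_state: callable mapping a task id to a state string
    (e.g. "STARTED", "SUCCESS", "FAILURE").
    Returns a dict of final states; raises TimeoutError on timeout.
    """
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        # Re-check only the tasks that have not finished yet.
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        pending = {t for t in pending if states[t] not in terminal}
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    return states
```

Finished tasks drop out of the polling set, which matches the log: once a task reports SUCCESS it no longer appears in later rounds.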
2025-10-09 10:26:44.341569 | orchestrator | 2025-10-09 10:26:44 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED
2025-10-09 10:26:44.344429 | orchestrator |
2025-10-09 10:26:44.344470 | orchestrator |
2025-10-09 10:26:44.344483 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-10-09 10:26:44.344495 | orchestrator |
2025-10-09 10:26:44.344507 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-10-09 10:26:44.344519 | orchestrator | Thursday 09 October 2025 10:25:22 +0000 (0:00:00.176) 0:00:00.176 ******
2025-10-09 10:26:44.344531 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-10-09 10:26:44.344543 |
orchestrator | 2025-10-09 10:26:44.344554 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-10-09 10:26:44.344565 | orchestrator | Thursday 09 October 2025 10:25:22 +0000 (0:00:00.781) 0:00:00.958 ******
2025-10-09 10:26:44.344576 | orchestrator | changed: [testbed-manager]
2025-10-09 10:26:44.344587 | orchestrator |
2025-10-09 10:26:44.344598 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-10-09 10:26:44.344608 | orchestrator | Thursday 09 October 2025 10:25:23 +0000 (0:00:01.057) 0:00:02.016 ******
2025-10-09 10:26:44.344619 | orchestrator | changed: [testbed-manager]
2025-10-09 10:26:44.344630 | orchestrator |
2025-10-09 10:26:44.344641 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:26:44.344652 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:26:44.344664 | orchestrator |
2025-10-09 10:26:44.344675 | orchestrator |
2025-10-09 10:26:44.344686 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:26:44.344697 | orchestrator | Thursday 09 October 2025 10:25:24 +0000 (0:00:00.497) 0:00:02.513 ******
2025-10-09 10:26:44.344707 | orchestrator | ===============================================================================
2025-10-09 10:26:44.344718 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.06s
2025-10-09 10:26:44.344728 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s
2025-10-09 10:26:44.344739 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s
2025-10-09 10:26:44.344750 | orchestrator |
2025-10-09 10:26:44.344761 | orchestrator |
2025-10-09 10:26:44.344771 | orchestrator | PLAY [Prepare kubeconfig file]
*************************************************
2025-10-09 10:26:44.344782 | orchestrator |
2025-10-09 10:26:44.344793 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-10-09 10:26:44.344804 | orchestrator | Thursday 09 October 2025 10:25:21 +0000 (0:00:00.331) 0:00:00.331 ******
2025-10-09 10:26:44.344814 | orchestrator | ok: [testbed-manager]
2025-10-09 10:26:44.344826 | orchestrator |
2025-10-09 10:26:44.344837 | orchestrator | TASK [Create .kube directory] **************************************************
2025-10-09 10:26:44.344847 | orchestrator | Thursday 09 October 2025 10:25:22 +0000 (0:00:00.596) 0:00:00.928 ******
2025-10-09 10:26:44.344884 | orchestrator | ok: [testbed-manager]
2025-10-09 10:26:44.344895 | orchestrator |
2025-10-09 10:26:44.345000 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-10-09 10:26:44.345012 | orchestrator | Thursday 09 October 2025 10:25:22 +0000 (0:00:00.551) 0:00:01.479 ******
2025-10-09 10:26:44.345024 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-10-09 10:26:44.345034 | orchestrator |
2025-10-09 10:26:44.345045 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-10-09 10:26:44.345056 | orchestrator | Thursday 09 October 2025 10:25:23 +0000 (0:00:00.770) 0:00:02.250 ******
2025-10-09 10:26:44.345067 | orchestrator | changed: [testbed-manager]
2025-10-09 10:26:44.345078 | orchestrator |
2025-10-09 10:26:44.345088 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-10-09 10:26:44.345099 | orchestrator | Thursday 09 October 2025 10:25:25 +0000 (0:00:01.447) 0:00:03.697 ******
2025-10-09 10:26:44.345110 | orchestrator | changed: [testbed-manager]
2025-10-09 10:26:44.345120 | orchestrator |
2025-10-09 10:26:44.345131 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-10-09 10:26:44.345142 | orchestrator | Thursday 09 October 2025 10:25:25 +0000 (0:00:00.419) 0:00:04.117 ******
2025-10-09 10:26:44.345153 | orchestrator | changed: [testbed-manager -> localhost]
2025-10-09 10:26:44.345163 | orchestrator |
2025-10-09 10:26:44.345174 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-10-09 10:26:44.345295 | orchestrator | Thursday 09 October 2025 10:25:27 +0000 (0:00:01.612) 0:00:05.730 ******
2025-10-09 10:26:44.345309 | orchestrator | changed: [testbed-manager -> localhost]
2025-10-09 10:26:44.345321 | orchestrator |
2025-10-09 10:26:44.345331 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-10-09 10:26:44.345342 | orchestrator | Thursday 09 October 2025 10:25:28 +0000 (0:00:00.881) 0:00:06.611 ******
2025-10-09 10:26:44.345354 | orchestrator | ok: [testbed-manager]
2025-10-09 10:26:44.345364 | orchestrator |
2025-10-09 10:26:44.345375 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-10-09 10:26:44.345386 | orchestrator | Thursday 09 October 2025 10:25:28 +0000 (0:00:00.449) 0:00:07.060 ******
2025-10-09 10:26:44.345397 | orchestrator | ok: [testbed-manager]
2025-10-09 10:26:44.345408 | orchestrator |
2025-10-09 10:26:44.345419 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:26:44.345430 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:26:44.345441 | orchestrator |
2025-10-09 10:26:44.345452 | orchestrator |
2025-10-09 10:26:44.345463 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:26:44.345474 | orchestrator | Thursday 09 October 2025 10:25:28 +0000 (0:00:00.329) 0:00:07.390 ******
2025-10-09 10:26:44.345485 | orchestrator |
===============================================================================
2025-10-09 10:26:44.345495 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.61s
2025-10-09 10:26:44.345506 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.45s
2025-10-09 10:26:44.345517 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.88s
2025-10-09 10:26:44.345542 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s
2025-10-09 10:26:44.345553 | orchestrator | Get home directory of operator user ------------------------------------- 0.60s
2025-10-09 10:26:44.345564 | orchestrator | Create .kube directory -------------------------------------------------- 0.55s
2025-10-09 10:26:44.345575 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.45s
2025-10-09 10:26:44.345585 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.42s
2025-10-09 10:26:44.345596 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.33s
2025-10-09 10:26:44.345607 | orchestrator |
2025-10-09 10:26:44.345628 | orchestrator |
2025-10-09 10:26:44.345639 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-10-09 10:26:44.345649 | orchestrator |
2025-10-09 10:26:44.345660 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-10-09 10:26:44.345671 | orchestrator | Thursday 09 October 2025 10:24:14 +0000 (0:00:00.275) 0:00:00.275 ******
2025-10-09 10:26:44.345682 | orchestrator | ok: [localhost] => {
2025-10-09 10:26:44.345695 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-10-09 10:26:44.345706 | orchestrator | }
2025-10-09 10:26:44.345718 | orchestrator |
2025-10-09 10:26:44.345728 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-10-09 10:26:44.345739 | orchestrator | Thursday 09 October 2025 10:24:14 +0000 (0:00:00.125) 0:00:00.401 ******
2025-10-09 10:26:44.345833 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-10-09 10:26:44.345858 | orchestrator | ...ignoring
2025-10-09 10:26:44.345869 | orchestrator |
2025-10-09 10:26:44.345881 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-10-09 10:26:44.345894 | orchestrator | Thursday 09 October 2025 10:24:19 +0000 (0:00:05.116) 0:00:05.517 ******
2025-10-09 10:26:44.345906 | orchestrator | skipping: [localhost]
2025-10-09 10:26:44.345918 | orchestrator |
2025-10-09 10:26:44.345931 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-10-09 10:26:44.345943 | orchestrator | Thursday 09 October 2025 10:24:19 +0000 (0:00:00.191) 0:00:05.709 ******
2025-10-09 10:26:44.345955 | orchestrator | ok: [localhost]
2025-10-09 10:26:44.345967 | orchestrator |
2025-10-09 10:26:44.345979 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:26:44.345992 | orchestrator |
2025-10-09 10:26:44.346004 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:26:44.346095 | orchestrator | Thursday 09 October 2025 10:24:20 +0000 (0:00:00.333) 0:00:06.042 ******
2025-10-09 10:26:44.346112 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:26:44.346125 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:26:44.346138 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:26:44.346150 | orchestrator |
2025-10-09 10:26:44.346162 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:26:44.346174 | orchestrator | Thursday 09 October 2025 10:24:20 +0000 (0:00:00.488) 0:00:06.531 ******
2025-10-09 10:26:44.346186 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-10-09 10:26:44.346199 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-10-09 10:26:44.346211 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-10-09 10:26:44.346223 | orchestrator |
2025-10-09 10:26:44.346272 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-10-09 10:26:44.346284 | orchestrator |
2025-10-09 10:26:44.346295 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-10-09 10:26:44.346306 | orchestrator | Thursday 09 October 2025 10:24:21 +0000 (0:00:00.971) 0:00:07.502 ******
2025-10-09 10:26:44.346317 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:26:44.346328 | orchestrator |
2025-10-09 10:26:44.346339 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-10-09 10:26:44.346350 | orchestrator | Thursday 09 October 2025 10:24:22 +0000 (0:00:00.675) 0:00:08.178 ******
2025-10-09 10:26:44.346361 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:26:44.346371 | orchestrator |
2025-10-09 10:26:44.346382 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-10-09 10:26:44.346392 | orchestrator | Thursday 09 October 2025 10:24:23 +0000 (0:00:01.024) 0:00:09.202 ******
2025-10-09 10:26:44.346403 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:26:44.346414 | orchestrator |
2025-10-09 10:26:44.346431 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-10-09 10:26:44.346452 | orchestrator | Thursday 09 October 2025 10:24:23 +0000 (0:00:00.481) 0:00:09.683 ******
2025-10-09 10:26:44.346463 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:26:44.346474 | orchestrator |
2025-10-09 10:26:44.346484 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-10-09 10:26:44.346495 | orchestrator | Thursday 09 October 2025 10:24:24 +0000 (0:00:00.429) 0:00:10.113 ******
2025-10-09 10:26:44.346506 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:26:44.346516 | orchestrator |
2025-10-09 10:26:44.346527 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-10-09 10:26:44.346538 | orchestrator | Thursday 09 October 2025 10:24:24 +0000 (0:00:00.463) 0:00:10.576 ******
2025-10-09 10:26:44.346548 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:26:44.346559 | orchestrator |
2025-10-09 10:26:44.346570 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-10-09 10:26:44.346580 | orchestrator | Thursday 09 October 2025 10:24:25 +0000 (0:00:00.881) 0:00:11.458 ******
2025-10-09 10:26:44.346591 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:26:44.346602 | orchestrator |
2025-10-09 10:26:44.346613 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-10-09 10:26:44.346635 | orchestrator | Thursday 09 October 2025 10:24:26 +0000 (0:00:00.745) 0:00:12.203 ******
2025-10-09 10:26:44.346646 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:26:44.346657 | orchestrator |
2025-10-09 10:26:44.346668 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-10-09 10:26:44.346678 | orchestrator | Thursday 09 October 2025 10:24:27 +0000 (0:00:01.761) 0:00:13.965 ******
2025-10-09
10:26:44.346689 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:26:44.346700 | orchestrator |
2025-10-09 10:26:44.346710 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-10-09 10:26:44.346721 | orchestrator | Thursday 09 October 2025 10:24:28 +0000 (0:00:00.643) 0:00:14.608 ******
2025-10-09 10:26:44.346732 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:26:44.346742 | orchestrator |
2025-10-09 10:26:44.346753 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-10-09 10:26:44.346764 | orchestrator | Thursday 09 October 2025 10:24:29 +0000 (0:00:00.577) 0:00:15.186 ******
2025-10-09 10:26:44.346780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-10-09 10:26:44.346797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-10-09 10:26:44.346823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-10-09
10:26:44.346836 | orchestrator | 2025-10-09 10:26:44.346847 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-10-09 10:26:44.346859 | orchestrator | Thursday 09 October 2025 10:24:31 +0000 (0:00:02.408) 0:00:17.594 ****** 2025-10-09 10:26:44.346878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:26:44.346890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:26:44.346903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:26:44.346922 | orchestrator | 2025-10-09 10:26:44.346933 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-10-09 10:26:44.346949 | orchestrator | Thursday 09 October 2025 10:24:35 +0000 (0:00:04.331) 0:00:21.926 ****** 2025-10-09 10:26:44.346960 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-10-09 10:26:44.346971 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-10-09 10:26:44.346982 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-10-09 10:26:44.346993 | orchestrator | 2025-10-09 10:26:44.347003 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-10-09 10:26:44.347014 | orchestrator | Thursday 09 October 2025 10:24:37 +0000 (0:00:01.833) 0:00:23.759 ****** 2025-10-09 10:26:44.347025 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-10-09 10:26:44.347036 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-10-09 10:26:44.347047 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-10-09 10:26:44.347058 | orchestrator | 2025-10-09 10:26:44.347069 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-10-09 10:26:44.347086 | orchestrator | Thursday 09 October 2025 10:24:42 +0000 (0:00:04.397) 0:00:28.156 ****** 2025-10-09 10:26:44.347097 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-10-09 10:26:44.347108 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-10-09 10:26:44.347119 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-10-09 10:26:44.347130 | orchestrator | 2025-10-09 10:26:44.347140 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-10-09 10:26:44.347151 | orchestrator | Thursday 09 October 2025 10:24:44 +0000 (0:00:01.993) 0:00:30.150 ****** 2025-10-09 10:26:44.347162 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-10-09 10:26:44.347173 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-10-09 10:26:44.347184 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-10-09 10:26:44.347195 | orchestrator | 2025-10-09 10:26:44.347206 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-10-09 10:26:44.347217 | orchestrator | Thursday 09 October 2025 10:24:47 +0000 (0:00:03.212) 0:00:33.362 ****** 2025-10-09 10:26:44.347228 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-10-09 10:26:44.347289 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-10-09 10:26:44.347308 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-10-09 10:26:44.347319 | orchestrator | 2025-10-09 10:26:44.347330 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-10-09 10:26:44.347341 | orchestrator | Thursday 09 October 2025 10:24:49 +0000 (0:00:02.594) 0:00:35.956 ****** 2025-10-09 10:26:44.347352 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-10-09 10:26:44.347363 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-10-09 10:26:44.347374 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-10-09 10:26:44.347384 | orchestrator | 2025-10-09 10:26:44.347395 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-10-09 10:26:44.347406 | orchestrator | Thursday 09 October 2025 10:24:51 +0000 (0:00:01.985) 0:00:37.942 ****** 2025-10-09 
10:26:44.347417 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:26:44.347428 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:26:44.347439 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:26:44.347450 | orchestrator | 2025-10-09 10:26:44.347460 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-10-09 10:26:44.347471 | orchestrator | Thursday 09 October 2025 10:24:52 +0000 (0:00:00.548) 0:00:38.490 ****** 2025-10-09 10:26:44.347489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:26:44.347509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:26:44.347522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:26:44.347541 | orchestrator | 2025-10-09 10:26:44.347552 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-10-09 10:26:44.347563 | orchestrator | Thursday 09 October 2025 
10:24:54 +0000 (0:00:01.903) 0:00:40.393 ****** 2025-10-09 10:26:44.347574 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:26:44.347585 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:26:44.347596 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:26:44.347607 | orchestrator | 2025-10-09 10:26:44.347618 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-10-09 10:26:44.347629 | orchestrator | Thursday 09 October 2025 10:24:55 +0000 (0:00:01.083) 0:00:41.476 ****** 2025-10-09 10:26:44.347639 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:26:44.347650 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:26:44.347661 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:26:44.347671 | orchestrator | 2025-10-09 10:26:44.347682 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-10-09 10:26:44.347693 | orchestrator | Thursday 09 October 2025 10:25:05 +0000 (0:00:10.488) 0:00:51.965 ****** 2025-10-09 10:26:44.347703 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:26:44.347714 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:26:44.347725 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:26:44.347735 | orchestrator | 2025-10-09 10:26:44.347746 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-10-09 10:26:44.347757 | orchestrator | 2025-10-09 10:26:44.347767 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-10-09 10:26:44.347778 | orchestrator | Thursday 09 October 2025 10:25:07 +0000 (0:00:01.166) 0:00:53.132 ****** 2025-10-09 10:26:44.347789 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:26:44.347799 | orchestrator | 2025-10-09 10:26:44.347809 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-10-09 10:26:44.347818 | orchestrator | Thursday 09 
October 2025 10:25:08 +0000 (0:00:00.863) 0:00:53.995 ****** 2025-10-09 10:26:44.347828 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:26:44.347837 | orchestrator | 2025-10-09 10:26:44.347847 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-10-09 10:26:44.347856 | orchestrator | Thursday 09 October 2025 10:25:08 +0000 (0:00:00.288) 0:00:54.284 ****** 2025-10-09 10:26:44.347866 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:26:44.347875 | orchestrator | 2025-10-09 10:26:44.347885 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-10-09 10:26:44.347894 | orchestrator | Thursday 09 October 2025 10:25:10 +0000 (0:00:02.006) 0:00:56.290 ****** 2025-10-09 10:26:44.347904 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:26:44.347913 | orchestrator | 2025-10-09 10:26:44.347923 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-10-09 10:26:44.347932 | orchestrator | 2025-10-09 10:26:44.347942 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-10-09 10:26:44.347952 | orchestrator | Thursday 09 October 2025 10:26:03 +0000 (0:00:53.206) 0:01:49.496 ****** 2025-10-09 10:26:44.347965 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:26:44.347975 | orchestrator | 2025-10-09 10:26:44.347985 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-10-09 10:26:44.347994 | orchestrator | Thursday 09 October 2025 10:26:04 +0000 (0:00:00.595) 0:01:50.092 ****** 2025-10-09 10:26:44.348004 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:26:44.348019 | orchestrator | 2025-10-09 10:26:44.348029 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-10-09 10:26:44.348038 | orchestrator | Thursday 09 October 2025 10:26:04 +0000 (0:00:00.256) 0:01:50.348 
****** 2025-10-09 10:26:44.348048 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:26:44.348057 | orchestrator | 2025-10-09 10:26:44.348067 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-10-09 10:26:44.348076 | orchestrator | Thursday 09 October 2025 10:26:06 +0000 (0:00:01.662) 0:01:52.011 ****** 2025-10-09 10:26:44.348086 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:26:44.348095 | orchestrator | 2025-10-09 10:26:44.348105 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-10-09 10:26:44.348114 | orchestrator | 2025-10-09 10:26:44.348124 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-10-09 10:26:44.348133 | orchestrator | Thursday 09 October 2025 10:26:23 +0000 (0:00:17.320) 0:02:09.332 ****** 2025-10-09 10:26:44.348143 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:26:44.348152 | orchestrator | 2025-10-09 10:26:44.348166 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-10-09 10:26:44.348176 | orchestrator | Thursday 09 October 2025 10:26:24 +0000 (0:00:00.696) 0:02:10.028 ****** 2025-10-09 10:26:44.348186 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:26:44.348195 | orchestrator | 2025-10-09 10:26:44.348205 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-10-09 10:26:44.348214 | orchestrator | Thursday 09 October 2025 10:26:24 +0000 (0:00:00.278) 0:02:10.307 ****** 2025-10-09 10:26:44.348224 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:26:44.348248 | orchestrator | 2025-10-09 10:26:44.348259 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-10-09 10:26:44.348269 | orchestrator | Thursday 09 October 2025 10:26:31 +0000 (0:00:07.105) 0:02:17.412 ****** 2025-10-09 10:26:44.348278 | orchestrator | 
changed: [testbed-node-2] 2025-10-09 10:26:44.348288 | orchestrator | 2025-10-09 10:26:44.348298 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-10-09 10:26:44.348307 | orchestrator | 2025-10-09 10:26:44.348317 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-10-09 10:26:44.348327 | orchestrator | Thursday 09 October 2025 10:26:40 +0000 (0:00:08.773) 0:02:26.186 ****** 2025-10-09 10:26:44.348336 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:26:44.348346 | orchestrator | 2025-10-09 10:26:44.348356 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-10-09 10:26:44.348365 | orchestrator | Thursday 09 October 2025 10:26:40 +0000 (0:00:00.535) 0:02:26.722 ****** 2025-10-09 10:26:44.348375 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-10-09 10:26:44.348385 | orchestrator | enable_outward_rabbitmq_True 2025-10-09 10:26:44.348394 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-10-09 10:26:44.348404 | orchestrator | outward_rabbitmq_restart 2025-10-09 10:26:44.348414 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:26:44.348423 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:26:44.348433 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:26:44.348443 | orchestrator | 2025-10-09 10:26:44.348453 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-10-09 10:26:44.348462 | orchestrator | skipping: no hosts matched 2025-10-09 10:26:44.348472 | orchestrator | 2025-10-09 10:26:44.348482 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-10-09 10:26:44.348491 | orchestrator | skipping: no hosts matched 2025-10-09 10:26:44.348501 | orchestrator | 2025-10-09 10:26:44.348510 | orchestrator | PLAY 
[Apply rabbitmq (outward) post-configuration] ***************************** 2025-10-09 10:26:44.348520 | orchestrator | skipping: no hosts matched 2025-10-09 10:26:44.348529 | orchestrator | 2025-10-09 10:26:44.348539 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:26:44.348558 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-10-09 10:26:44.348568 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-10-09 10:26:44.348578 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:26:44.348588 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:26:44.348597 | orchestrator | 2025-10-09 10:26:44.348607 | orchestrator | 2025-10-09 10:26:44.348617 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:26:44.348626 | orchestrator | Thursday 09 October 2025 10:26:43 +0000 (0:00:02.800) 0:02:29.522 ****** 2025-10-09 10:26:44.348636 | orchestrator | =============================================================================== 2025-10-09 10:26:44.348646 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.30s 2025-10-09 10:26:44.348655 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.77s 2025-10-09 10:26:44.348665 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 10.49s 2025-10-09 10:26:44.348674 | orchestrator | Check RabbitMQ service -------------------------------------------------- 5.12s 2025-10-09 10:26:44.348684 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.40s 2025-10-09 10:26:44.348711 | orchestrator | rabbitmq : Copying over config.json 
files for services ------------------ 4.33s 2025-10-09 10:26:44.348721 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.21s 2025-10-09 10:26:44.348730 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.80s 2025-10-09 10:26:44.348740 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.59s 2025-10-09 10:26:44.348749 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.41s 2025-10-09 10:26:44.348759 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.16s 2025-10-09 10:26:44.348769 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.99s 2025-10-09 10:26:44.348779 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.99s 2025-10-09 10:26:44.348788 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.90s 2025-10-09 10:26:44.348798 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.83s 2025-10-09 10:26:44.348807 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.76s 2025-10-09 10:26:44.348817 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.17s 2025-10-09 10:26:44.348832 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.08s 2025-10-09 10:26:44.348842 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.02s 2025-10-09 10:26:44.348851 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s 2025-10-09 10:26:44.348861 | orchestrator | 2025-10-09 10:26:44 | INFO  | Task e69f7acb-5b18-4a62-a68c-1cf34b7cb95c is in state SUCCESS 2025-10-09 10:26:44.348871 | orchestrator | 2025-10-09 10:26:44 | INFO  | Task 
8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:26:44.348881 | orchestrator | 2025-10-09 10:26:44 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:26:44.348891 | orchestrator | 2025-10-09 10:26:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:47.382956 | orchestrator | 2025-10-09 10:26:47 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:26:47.384439 | orchestrator | 2025-10-09 10:26:47 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:26:47.385722 | orchestrator | 2025-10-09 10:26:47 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:26:47.385900 | orchestrator | 2025-10-09 10:26:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:50.429408 | orchestrator | 2025-10-09 10:26:50 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:26:50.431108 | orchestrator | 2025-10-09 10:26:50 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:26:50.433152 | orchestrator | 2025-10-09 10:26:50 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:26:50.433684 | orchestrator | 2025-10-09 10:26:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:53.473489 | orchestrator | 2025-10-09 10:26:53 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:26:53.473575 | orchestrator | 2025-10-09 10:26:53 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:26:53.473589 | orchestrator | 2025-10-09 10:26:53 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:26:53.473601 | orchestrator | 2025-10-09 10:26:53 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:56.512943 | orchestrator | 2025-10-09 10:26:56 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state 
STARTED 2025-10-09 10:26:56.515015 | orchestrator | 2025-10-09 10:26:56 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:26:56.515047 | orchestrator | 2025-10-09 10:26:56 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:26:56.515059 | orchestrator | 2025-10-09 10:26:56 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:26:59.579435 | orchestrator | 2025-10-09 10:26:59 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:26:59.579534 | orchestrator | 2025-10-09 10:26:59 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:26:59.579551 | orchestrator | 2025-10-09 10:26:59 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:26:59.579564 | orchestrator | 2025-10-09 10:26:59 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:02.586209 | orchestrator | 2025-10-09 10:27:02 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:27:02.586707 | orchestrator | 2025-10-09 10:27:02 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:27:02.588400 | orchestrator | 2025-10-09 10:27:02 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:27:02.588426 | orchestrator | 2025-10-09 10:27:02 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:05.633774 | orchestrator | 2025-10-09 10:27:05 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:27:05.635639 | orchestrator | 2025-10-09 10:27:05 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:27:05.638453 | orchestrator | 2025-10-09 10:27:05 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:27:05.638472 | orchestrator | 2025-10-09 10:27:05 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:08.689970 | orchestrator | 
2025-10-09 10:27:08 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:27:08.693045 | orchestrator | 2025-10-09 10:27:08 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:27:08.693201 | orchestrator | 2025-10-09 10:27:08 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:27:08.693383 | orchestrator | 2025-10-09 10:27:08 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:11.746404 | orchestrator | 2025-10-09 10:27:11 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:27:11.754620 | orchestrator | 2025-10-09 10:27:11 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:27:11.756417 | orchestrator | 2025-10-09 10:27:11 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:27:11.756732 | orchestrator | 2025-10-09 10:27:11 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:14.802616 | orchestrator | 2025-10-09 10:27:14 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:27:14.803276 | orchestrator | 2025-10-09 10:27:14 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:27:14.804381 | orchestrator | 2025-10-09 10:27:14 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:27:14.804413 | orchestrator | 2025-10-09 10:27:14 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:17.838834 | orchestrator | 2025-10-09 10:27:17 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:27:17.839946 | orchestrator | 2025-10-09 10:27:17 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:27:17.841448 | orchestrator | 2025-10-09 10:27:17 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:27:17.841471 | orchestrator | 2025-10-09 10:27:17 | INFO  | 
Wait 1 second(s) until the next check 2025-10-09 10:27:20.895117 | orchestrator | 2025-10-09 10:27:20 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:27:20.895284 | orchestrator | 2025-10-09 10:27:20 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:27:20.896018 | orchestrator | 2025-10-09 10:27:20 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:27:20.896045 | orchestrator | 2025-10-09 10:27:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:23.940781 | orchestrator | 2025-10-09 10:27:23 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:27:23.942871 | orchestrator | 2025-10-09 10:27:23 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:27:23.945054 | orchestrator | 2025-10-09 10:27:23 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:27:23.945073 | orchestrator | 2025-10-09 10:27:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:26.981368 | orchestrator | 2025-10-09 10:27:26 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state STARTED 2025-10-09 10:27:26.983488 | orchestrator | 2025-10-09 10:27:26 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:27:26.985372 | orchestrator | 2025-10-09 10:27:26 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:27:26.985602 | orchestrator | 2025-10-09 10:27:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:27:30.037378 | orchestrator | 2025-10-09 10:27:30 | INFO  | Task f50549ea-570b-4136-b742-034ed6cceac9 is in state SUCCESS 2025-10-09 10:27:30.040054 | orchestrator | 2025-10-09 10:27:30.040102 | orchestrator | 2025-10-09 10:27:30.040116 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:27:30.040137 | orchestrator | 
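The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages above are the console trace of a poll-until-terminal loop over the orchestrator's task queue. A minimal sketch of that pattern, assuming a hypothetical `get_state` lookup (this is not the actual OSISM manager code, only the shape of the loop visible in the log):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll each task until it reaches a terminal state; return final states.

    get_state: callable mapping a task id to its state string
    (hypothetical stand-in for the real task-state lookup).
    """
    pending = set(task_ids)
    final = {}
    while pending:
        # sorted() copies the set, so discarding inside the loop is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):  # terminal: stop polling it
                final[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return final
```

With the three STARTED task ids above, a loop like this prints each task's state once per interval until every task reports SUCCESS or FAILURE, which matches the cadence of the log entries.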
2025-10-09 10:27:30.040149 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:27:30.040181 | orchestrator | Thursday 09 October 2025 10:25:14 +0000 (0:00:00.295) 0:00:00.295 ****** 2025-10-09 10:27:30.040192 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:27:30.040204 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:27:30.040215 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:27:30.040225 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:27:30.040261 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:27:30.040272 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:27:30.040282 | orchestrator | 2025-10-09 10:27:30.040293 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:27:30.040304 | orchestrator | Thursday 09 October 2025 10:25:15 +0000 (0:00:00.856) 0:00:01.152 ****** 2025-10-09 10:27:30.040316 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-10-09 10:27:30.040328 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-10-09 10:27:30.040339 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-10-09 10:27:30.040350 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-10-09 10:27:30.040360 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-10-09 10:27:30.040371 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-10-09 10:27:30.040381 | orchestrator | 2025-10-09 10:27:30.040392 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-10-09 10:27:30.040403 | orchestrator | 2025-10-09 10:27:30.040413 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-10-09 10:27:30.040424 | orchestrator | Thursday 09 October 2025 10:25:16 +0000 (0:00:01.185) 0:00:02.337 ****** 2025-10-09 10:27:30.040436 | orchestrator | included: 
/ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:27:30.040448 | orchestrator | 2025-10-09 10:27:30.040459 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-10-09 10:27:30.040470 | orchestrator | Thursday 09 October 2025 10:25:18 +0000 (0:00:01.129) 0:00:03.466 ****** 2025-10-09 10:27:30.040483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040561 | orchestrator | 2025-10-09 10:27:30.040585 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-10-09 10:27:30.040601 | orchestrator | Thursday 09 October 2025 10:25:20 +0000 (0:00:02.043) 0:00:05.509 ****** 2025-10-09 10:27:30.040613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040624 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040691 | orchestrator | 2025-10-09 10:27:30.040704 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-10-09 10:27:30.040717 | orchestrator | Thursday 09 October 2025 10:25:23 +0000 (0:00:03.398) 0:00:08.908 ****** 2025-10-09 10:27:30.040729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040787 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040800 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040826 | orchestrator | 2025-10-09 10:27:30.040839 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-10-09 10:27:30.040852 | orchestrator | Thursday 09 October 2025 10:25:25 +0000 (0:00:01.891) 0:00:10.800 ****** 2025-10-09 10:27:30.040865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040936 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.040949 | orchestrator | 2025-10-09 10:27:30.040966 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-10-09 10:27:30.040984 | orchestrator | Thursday 09 October 2025 10:25:26 +0000 (0:00:01.500) 0:00:12.300 ****** 2025-10-09 10:27:30.040995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.041007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.041018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.041029 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.041040 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.041058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.041069 | orchestrator | 2025-10-09 10:27:30.041080 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-10-09 10:27:30.041091 | orchestrator | Thursday 09 October 2025 10:25:28 +0000 (0:00:01.590) 0:00:13.891 ****** 2025-10-09 10:27:30.041102 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:27:30.041113 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:27:30.041123 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:27:30.041134 | 
orchestrator | changed: [testbed-node-3] 2025-10-09 10:27:30.041144 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:27:30.041155 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:27:30.041166 | orchestrator | 2025-10-09 10:27:30.041176 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-10-09 10:27:30.041187 | orchestrator | Thursday 09 October 2025 10:25:31 +0000 (0:00:02.610) 0:00:16.501 ****** 2025-10-09 10:27:30.041197 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-10-09 10:27:30.041209 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-10-09 10:27:30.041219 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-10-09 10:27:30.041230 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-10-09 10:27:30.041292 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-10-09 10:27:30.041312 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-10-09 10:27:30.041329 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:27:30.041346 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:27:30.041519 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:27:30.041543 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:27:30.041554 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-10-09 10:27:30.041565 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 
'geneve'}) 2025-10-09 10:27:30.041576 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:27:30.041589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:27:30.041600 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:27:30.041611 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:27:30.041623 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:27:30.041633 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-10-09 10:27:30.041645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:27:30.041657 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:27:30.041677 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:27:30.041688 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:27:30.041699 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:27:30.041709 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-10-09 10:27:30.041720 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:27:30.041731 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:27:30.041742 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:27:30.041753 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:27:30.041764 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:27:30.041774 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-10-09 10:27:30.041785 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:27:30.041796 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:27:30.041807 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:27:30.041818 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:27:30.041829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:27:30.041840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-10-09 10:27:30.041851 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-10-09 10:27:30.041861 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-10-09 10:27:30.041872 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-10-09 10:27:30.041883 
| orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-10-09 10:27:30.041894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-10-09 10:27:30.041905 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-10-09 10:27:30.041916 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-10-09 10:27:30.041927 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-10-09 10:27:30.041944 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-10-09 10:27:30.041961 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-10-09 10:27:30.041972 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-10-09 10:27:30.041983 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-10-09 10:27:30.041994 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-10-09 10:27:30.042011 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-10-09 10:27:30.042121 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-10-09 
10:27:30.042133 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-10-09 10:27:30.042144 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-10-09 10:27:30.042157 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-10-09 10:27:30.042170 | orchestrator | 2025-10-09 10:27:30.042183 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:27:30.042196 | orchestrator | Thursday 09 October 2025 10:25:50 +0000 (0:00:19.491) 0:00:35.992 ****** 2025-10-09 10:27:30.042209 | orchestrator | 2025-10-09 10:27:30.042222 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:27:30.042289 | orchestrator | Thursday 09 October 2025 10:25:51 +0000 (0:00:00.489) 0:00:36.482 ****** 2025-10-09 10:27:30.042303 | orchestrator | 2025-10-09 10:27:30.042316 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:27:30.042328 | orchestrator | Thursday 09 October 2025 10:25:51 +0000 (0:00:00.074) 0:00:36.556 ****** 2025-10-09 10:27:30.042341 | orchestrator | 2025-10-09 10:27:30.042354 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:27:30.042366 | orchestrator | Thursday 09 October 2025 10:25:51 +0000 (0:00:00.068) 0:00:36.624 ****** 2025-10-09 10:27:30.042378 | orchestrator | 2025-10-09 10:27:30.042392 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-10-09 10:27:30.042405 | orchestrator | Thursday 09 October 2025 10:25:51 +0000 (0:00:00.067) 0:00:36.691 ****** 2025-10-09 10:27:30.042417 | orchestrator | 2025-10-09 10:27:30.042430 | orchestrator | TASK [ovn-controller : Flush 
handlers] ***************************************** 2025-10-09 10:27:30.042443 | orchestrator | Thursday 09 October 2025 10:25:51 +0000 (0:00:00.066) 0:00:36.758 ****** 2025-10-09 10:27:30.042456 | orchestrator | 2025-10-09 10:27:30.042468 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-10-09 10:27:30.042481 | orchestrator | Thursday 09 October 2025 10:25:51 +0000 (0:00:00.066) 0:00:36.825 ****** 2025-10-09 10:27:30.042494 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:27:30.042507 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:27:30.042518 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:27:30.042529 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:27:30.042540 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:27:30.042550 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:27:30.042561 | orchestrator | 2025-10-09 10:27:30.042572 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-10-09 10:27:30.042583 | orchestrator | Thursday 09 October 2025 10:25:53 +0000 (0:00:01.994) 0:00:38.820 ****** 2025-10-09 10:27:30.042594 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:27:30.042605 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:27:30.042616 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:27:30.042627 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:27:30.042637 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:27:30.042648 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:27:30.042659 | orchestrator | 2025-10-09 10:27:30.042670 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-10-09 10:27:30.042681 | orchestrator | 2025-10-09 10:27:30.042692 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-10-09 10:27:30.042702 | orchestrator | Thursday 09 October 2025 10:26:01 +0000 (0:00:08.396) 0:00:47.216 
******
2025-10-09 10:27:30.042713 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:27:30.042732 | orchestrator |
2025-10-09 10:27:30.042743 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-10-09 10:27:30.042754 | orchestrator | Thursday 09 October 2025 10:26:02 +0000 (0:00:00.797) 0:00:48.013 ******
2025-10-09 10:27:30.042765 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:27:30.042776 | orchestrator |
2025-10-09 10:27:30.042787 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-10-09 10:27:30.042798 | orchestrator | Thursday 09 October 2025 10:26:03 +0000 (0:00:00.563) 0:00:48.577 ******
2025-10-09 10:27:30.042809 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.042820 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.042829 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.042839 | orchestrator |
2025-10-09 10:27:30.042849 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-10-09 10:27:30.042859 | orchestrator | Thursday 09 October 2025 10:26:04 +0000 (0:00:00.944) 0:00:49.522 ******
2025-10-09 10:27:30.042868 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.042878 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.042887 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.042904 | orchestrator |
2025-10-09 10:27:30.042914 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-10-09 10:27:30.042924 | orchestrator | Thursday 09 October 2025 10:26:04 +0000 (0:00:00.343) 0:00:49.863 ******
2025-10-09 10:27:30.042934 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.042943 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.042953 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.042963 | orchestrator |
2025-10-09 10:27:30.042973 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-10-09 10:27:30.042982 | orchestrator | Thursday 09 October 2025 10:26:04 +0000 (0:00:00.343) 0:00:50.207 ******
2025-10-09 10:27:30.042992 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.043001 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.043011 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.043021 | orchestrator |
2025-10-09 10:27:30.043030 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-10-09 10:27:30.043040 | orchestrator | Thursday 09 October 2025 10:26:05 +0000 (0:00:00.331) 0:00:50.539 ******
2025-10-09 10:27:30.043050 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.043059 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.043069 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.043079 | orchestrator |
2025-10-09 10:27:30.043088 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-10-09 10:27:30.043098 | orchestrator | Thursday 09 October 2025 10:26:05 +0000 (0:00:00.584) 0:00:51.123 ******
2025-10-09 10:27:30.043108 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043117 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043127 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043137 | orchestrator |
2025-10-09 10:27:30.043146 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-10-09 10:27:30.043156 | orchestrator | Thursday 09 October 2025 10:26:06 +0000 (0:00:00.336) 0:00:51.460 ******
2025-10-09 10:27:30.043166 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043175 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043185 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043195 | orchestrator |
2025-10-09 10:27:30.043204 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-10-09 10:27:30.043214 | orchestrator | Thursday 09 October 2025 10:26:06 +0000 (0:00:00.293) 0:00:51.753 ******
2025-10-09 10:27:30.043224 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043250 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043261 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043270 | orchestrator |
2025-10-09 10:27:30.043280 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-10-09 10:27:30.043300 | orchestrator | Thursday 09 October 2025 10:26:06 +0000 (0:00:00.299) 0:00:52.052 ******
2025-10-09 10:27:30.043310 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043320 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043330 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043339 | orchestrator |
2025-10-09 10:27:30.043349 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-10-09 10:27:30.043358 | orchestrator | Thursday 09 October 2025 10:26:07 +0000 (0:00:00.584) 0:00:52.636 ******
2025-10-09 10:27:30.043368 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043378 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043387 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043396 | orchestrator |
2025-10-09 10:27:30.043406 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-10-09 10:27:30.043416 | orchestrator | Thursday 09 October 2025 10:26:07 +0000 (0:00:00.322) 0:00:52.959 ******
2025-10-09 10:27:30.043425 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043435 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043445 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043454 | orchestrator |
2025-10-09 10:27:30.043464 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-10-09 10:27:30.043473 | orchestrator | Thursday 09 October 2025 10:26:07 +0000 (0:00:00.325) 0:00:53.285 ******
2025-10-09 10:27:30.043483 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043492 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043502 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043512 | orchestrator |
2025-10-09 10:27:30.043521 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-10-09 10:27:30.043531 | orchestrator | Thursday 09 October 2025 10:26:08 +0000 (0:00:00.391) 0:00:53.677 ******
2025-10-09 10:27:30.043541 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043550 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043560 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043569 | orchestrator |
2025-10-09 10:27:30.043604 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-10-09 10:27:30.043614 | orchestrator | Thursday 09 October 2025 10:26:08 +0000 (0:00:00.512) 0:00:54.189 ******
2025-10-09 10:27:30.043624 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043633 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043643 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043653 | orchestrator |
2025-10-09 10:27:30.043662 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-10-09 10:27:30.043672 | orchestrator | Thursday 09 October 2025 10:26:09 +0000 (0:00:00.374) 0:00:54.564 ******
2025-10-09 10:27:30.043682 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043691 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043701 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043710 | orchestrator |
2025-10-09 10:27:30.043720 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-10-09 10:27:30.043730 | orchestrator | Thursday 09 October 2025 10:26:09 +0000 (0:00:00.331) 0:00:54.895 ******
2025-10-09 10:27:30.043740 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043749 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043759 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043768 | orchestrator |
2025-10-09 10:27:30.043778 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-10-09 10:27:30.043788 | orchestrator | Thursday 09 October 2025 10:26:09 +0000 (0:00:00.400) 0:00:55.296 ******
2025-10-09 10:27:30.043798 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.043807 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.043823 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.043833 | orchestrator |
2025-10-09 10:27:30.043847 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-10-09 10:27:30.043863 | orchestrator | Thursday 09 October 2025 10:26:10 +0000 (0:00:00.344) 0:00:55.640 ******
2025-10-09 10:27:30.043873 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:27:30.043883 | orchestrator |
2025-10-09 10:27:30.043893 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-10-09 10:27:30.043903 | orchestrator | Thursday 09 October 2025 10:26:11 +0000 (0:00:00.950) 0:00:56.591 ******
2025-10-09 10:27:30.043912 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.043922 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.043932 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.043941 | orchestrator |
2025-10-09 10:27:30.043951 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-10-09 10:27:30.043961 | orchestrator | Thursday 09 October 2025 10:26:11 +0000 (0:00:00.486) 0:00:57.077 ******
2025-10-09 10:27:30.043971 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.043980 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.043990 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.043999 | orchestrator |
2025-10-09 10:27:30.044009 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-10-09 10:27:30.044019 | orchestrator | Thursday 09 October 2025 10:26:12 +0000 (0:00:00.473) 0:00:57.551 ******
2025-10-09 10:27:30.044029 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.044038 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.044048 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.044057 | orchestrator |
2025-10-09 10:27:30.044067 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-10-09 10:27:30.044077 | orchestrator | Thursday 09 October 2025 10:26:12 +0000 (0:00:00.624) 0:00:58.175 ******
2025-10-09 10:27:30.044086 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.044096 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.044105 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.044115 | orchestrator |
2025-10-09 10:27:30.044125 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-10-09 10:27:30.044134 | orchestrator | Thursday 09 October 2025 10:26:13 +0000 (0:00:00.418) 0:00:58.593 ******
2025-10-09 10:27:30.044144 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.044154 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.044163 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.044173 | orchestrator |
2025-10-09 10:27:30.044183 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-10-09 10:27:30.044192 | orchestrator | Thursday 09 October 2025 10:26:13 +0000 (0:00:00.403) 0:00:58.996 ******
2025-10-09 10:27:30.044202 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.044212 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.044221 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.044246 | orchestrator |
2025-10-09 10:27:30.044256 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-10-09 10:27:30.044266 | orchestrator | Thursday 09 October 2025 10:26:13 +0000 (0:00:00.409) 0:00:59.406 ******
2025-10-09 10:27:30.044276 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.044286 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.044295 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.044305 | orchestrator |
2025-10-09 10:27:30.044315 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-10-09 10:27:30.044325 | orchestrator | Thursday 09 October 2025 10:26:14 +0000 (0:00:00.650) 0:01:00.057 ******
2025-10-09 10:27:30.044334 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.044344 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.044353 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.044363 | orchestrator |
2025-10-09 10:27:30.044373 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-10-09 10:27:30.044382 | orchestrator | Thursday 09 October 2025 10:26:14 +0000 (0:00:00.353) 0:01:00.410 ******
2025-10-09 10:27:30.044399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044507 | orchestrator |
2025-10-09 10:27:30.044517 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-10-09 10:27:30.044527 | orchestrator | Thursday 09 October 2025 10:26:16 +0000 (0:00:01.490) 0:01:01.901 ******
2025-10-09 10:27:30.044537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044646 | orchestrator |
2025-10-09 10:27:30.044656 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-10-09 10:27:30.044666 | orchestrator | Thursday 09 October 2025 10:26:20 +0000 (0:00:04.260) 0:01:06.162 ******
2025-10-09 10:27:30.044676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.044784 | orchestrator |
2025-10-09 10:27:30.044794 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-10-09 10:27:30.044804 | orchestrator | Thursday 09 October 2025 10:26:23 +0000 (0:00:02.584) 0:01:08.746 ******
2025-10-09 10:27:30.044814 | orchestrator |
2025-10-09 10:27:30.044824 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-10-09 10:27:30.044842 | orchestrator | Thursday 09 October 2025 10:26:23 +0000 (0:00:00.067) 0:01:08.814 ******
2025-10-09 10:27:30.044851 | orchestrator |
2025-10-09 10:27:30.044861 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-10-09 10:27:30.044871 | orchestrator | Thursday 09 October 2025 10:26:23 +0000 (0:00:00.067) 0:01:08.882 ******
2025-10-09 10:27:30.044880 | orchestrator |
2025-10-09 10:27:30.044890 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-10-09 10:27:30.044899 | orchestrator | Thursday 09 October 2025 10:26:23 +0000 (0:00:00.064) 0:01:08.946 ******
2025-10-09 10:27:30.044909 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:27:30.044919 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:27:30.044928 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:27:30.044938 | orchestrator |
2025-10-09 10:27:30.044948 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-10-09 10:27:30.044957 | orchestrator | Thursday 09 October 2025 10:26:31 +0000 (0:00:07.925) 0:01:16.872 ******
2025-10-09 10:27:30.044967 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:27:30.044976 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:27:30.044986 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:27:30.044996 | orchestrator |
2025-10-09 10:27:30.045005 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-10-09 10:27:30.045015 | orchestrator | Thursday 09 October 2025 10:26:38 +0000 (0:00:07.266) 0:01:24.138 ******
2025-10-09 10:27:30.045024 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:27:30.045034 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:27:30.045044 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:27:30.045053 | orchestrator |
2025-10-09 10:27:30.045063 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-10-09 10:27:30.045073 | orchestrator | Thursday 09 October 2025 10:26:45 +0000 (0:00:06.447) 0:01:30.586 ******
2025-10-09 10:27:30.045082 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.045092 | orchestrator |
2025-10-09 10:27:30.045102 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-10-09 10:27:30.045111 | orchestrator | Thursday 09 October 2025 10:26:45 +0000 (0:00:00.440) 0:01:31.027 ******
2025-10-09 10:27:30.045121 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.045131 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.045140 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.045150 | orchestrator |
2025-10-09 10:27:30.045159 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-10-09 10:27:30.045169 | orchestrator | Thursday 09 October 2025 10:26:46 +0000 (0:00:01.139) 0:01:32.166 ******
2025-10-09 10:27:30.045178 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.045188 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.045198 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:27:30.045207 | orchestrator |
2025-10-09 10:27:30.045217 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-10-09 10:27:30.045226 | orchestrator | Thursday 09 October 2025 10:26:47 +0000 (0:00:00.779) 0:01:32.945 ******
2025-10-09 10:27:30.045250 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.045260 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.045269 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.045279 | orchestrator |
2025-10-09 10:27:30.045289 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-10-09 10:27:30.045298 | orchestrator | Thursday 09 October 2025 10:26:48 +0000 (0:00:00.998) 0:01:33.944 ******
2025-10-09 10:27:30.045308 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.045317 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.045327 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:27:30.045337 | orchestrator |
2025-10-09 10:27:30.045346 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-10-09 10:27:30.045356 | orchestrator | Thursday 09 October 2025 10:26:49 +0000 (0:00:00.604) 0:01:34.548 ******
2025-10-09 10:27:30.045366 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.045383 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.045398 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.045408 | orchestrator |
2025-10-09 10:27:30.045422 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-10-09 10:27:30.045432 | orchestrator | Thursday 09 October 2025 10:26:50 +0000 (0:00:01.525) 0:01:36.073 ******
2025-10-09 10:27:30.045442 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.045451 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.045461 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.045470 | orchestrator |
2025-10-09 10:27:30.045480 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-10-09 10:27:30.045490 | orchestrator | Thursday 09 October 2025 10:26:51 +0000 (0:00:00.948) 0:01:37.022 ******
2025-10-09 10:27:30.045500 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.045509 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.045519 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.045528 | orchestrator |
2025-10-09 10:27:30.045538 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-10-09 10:27:30.045548 | orchestrator | Thursday 09 October 2025 10:26:51 +0000 (0:00:00.309) 0:01:37.332 ******
2025-10-09 10:27:30.045558 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045568 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045578 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045598 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045608 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045618 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045638 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045658 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045669 | orchestrator |
2025-10-09 10:27:30.045679 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-10-09 10:27:30.045688 | orchestrator | Thursday 09 October 2025 10:26:53 +0000 (0:00:01.454) 0:01:38.787 ******
2025-10-09 10:27:30.045698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045708 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045718 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045728 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045758 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045797 | orchestrator |
2025-10-09 10:27:30.045807 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-10-09 10:27:30.045817 | orchestrator | Thursday 09 October 2025 10:26:58 +0000 (0:00:05.163) 0:01:43.951 ******
2025-10-09 10:27:30.045837 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045848 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045858 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:27:30.045868 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.045878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.045888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.045898 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.045908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.045924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 
'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:27:30.045934 | orchestrator | 2025-10-09 10:27:30.045943 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-09 10:27:30.045953 | orchestrator | Thursday 09 October 2025 10:27:01 +0000 (0:00:03.077) 0:01:47.028 ****** 2025-10-09 10:27:30.045963 | orchestrator | 2025-10-09 10:27:30.045973 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-09 10:27:30.045982 | orchestrator | Thursday 09 October 2025 10:27:01 +0000 (0:00:00.077) 0:01:47.106 ****** 2025-10-09 10:27:30.045992 | orchestrator | 2025-10-09 10:27:30.046002 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-10-09 10:27:30.046011 | orchestrator | Thursday 09 October 2025 10:27:01 +0000 (0:00:00.080) 0:01:47.187 ****** 2025-10-09 10:27:30.046051 | orchestrator | 2025-10-09 10:27:30.046062 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-10-09 10:27:30.046072 | orchestrator | Thursday 09 October 2025 10:27:01 +0000 (0:00:00.079) 0:01:47.266 ****** 2025-10-09 10:27:30.046081 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:27:30.046091 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:27:30.046100 | orchestrator | 2025-10-09 10:27:30.046116 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-10-09 10:27:30.046130 | orchestrator | Thursday 09 October 2025 10:27:08 +0000 (0:00:06.248) 0:01:53.514 ****** 2025-10-09 10:27:30.046140 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:27:30.046150 | orchestrator | changed: [testbed-node-2] 
2025-10-09 10:27:30.046159 | orchestrator |
2025-10-09 10:27:30.046169 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-10-09 10:27:30.046179 | orchestrator | Thursday 09 October 2025 10:27:14 +0000 (0:00:06.177) 0:01:59.692 ******
2025-10-09 10:27:30.046188 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:27:30.046198 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:27:30.046208 | orchestrator |
2025-10-09 10:27:30.046217 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-10-09 10:27:30.046227 | orchestrator | Thursday 09 October 2025 10:27:21 +0000 (0:00:06.890) 0:02:06.583 ******
2025-10-09 10:27:30.046287 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:27:30.046297 | orchestrator |
2025-10-09 10:27:30.046307 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-10-09 10:27:30.046316 | orchestrator | Thursday 09 October 2025 10:27:21 +0000 (0:00:00.210) 0:02:06.794 ******
2025-10-09 10:27:30.046326 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.046336 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.046345 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.046355 | orchestrator |
2025-10-09 10:27:30.046365 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-10-09 10:27:30.046375 | orchestrator | Thursday 09 October 2025 10:27:22 +0000 (0:00:01.088) 0:02:07.882 ******
2025-10-09 10:27:30.046384 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.046394 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.046403 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:27:30.046413 | orchestrator |
2025-10-09 10:27:30.046423 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-10-09 10:27:30.046433 | orchestrator | Thursday 09 October 2025 10:27:23 +0000 (0:00:00.839) 0:02:08.722 ******
2025-10-09 10:27:30.046442 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.046459 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.046469 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.046479 | orchestrator |
2025-10-09 10:27:30.046488 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-10-09 10:27:30.046498 | orchestrator | Thursday 09 October 2025 10:27:24 +0000 (0:00:00.909) 0:02:09.632 ******
2025-10-09 10:27:30.046508 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:27:30.046517 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:27:30.046527 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:27:30.046536 | orchestrator |
2025-10-09 10:27:30.046546 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-10-09 10:27:30.046556 | orchestrator | Thursday 09 October 2025 10:27:24 +0000 (0:00:00.654) 0:02:10.286 ******
2025-10-09 10:27:30.046566 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.046575 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.046585 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.046595 | orchestrator |
2025-10-09 10:27:30.046604 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-10-09 10:27:30.046614 | orchestrator | Thursday 09 October 2025 10:27:25 +0000 (0:00:00.794) 0:02:11.080 ******
2025-10-09 10:27:30.046624 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:27:30.046633 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:27:30.046643 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:27:30.046652 | orchestrator |
2025-10-09 10:27:30.046662 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:27:30.046672 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-10-09 10:27:30.046682 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-10-09 10:27:30.046692 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-10-09 10:27:30.046702 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:27:30.046712 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:27:30.046722 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:27:30.046731 | orchestrator |
2025-10-09 10:27:30.046741 | orchestrator |
2025-10-09 10:27:30.046751 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:27:30.046760 | orchestrator | Thursday 09 October 2025 10:27:26 +0000 (0:00:01.140) 0:02:12.221 ******
2025-10-09 10:27:30.046770 | orchestrator | ===============================================================================
2025-10-09 10:27:30.046780 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.49s
2025-10-09 10:27:30.046789 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.17s
2025-10-09 10:27:30.046799 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.44s
2025-10-09 10:27:30.046808 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.34s
2025-10-09 10:27:30.046816 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.40s
2025-10-09 10:27:30.046824 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.16s
2025-10-09 10:27:30.046832 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.26s
2025-10-09 10:27:30.046844 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.40s
2025-10-09 10:27:30.046857 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.08s
2025-10-09 10:27:30.046870 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.61s
2025-10-09 10:27:30.046878 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.58s
2025-10-09 10:27:30.046886 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.04s
2025-10-09 10:27:30.046894 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.99s
2025-10-09 10:27:30.046902 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.89s
2025-10-09 10:27:30.046910 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.59s
2025-10-09 10:27:30.046918 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.53s
2025-10-09 10:27:30.046926 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.50s
2025-10-09 10:27:30.046934 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s
2025-10-09 10:27:30.046942 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s
2025-10-09 10:27:30.046950 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.19s
2025-10-09 10:27:30.046957 | orchestrator | 2025-10-09 10:27:30 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:27:30.046965 | orchestrator | 2025-10-09 10:27:30 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:27:30.046974 | orchestrator | 2025-10-09 10:27:30 | INFO  | Wait 1 second(s) until the next check
until the next check 2025-10-09 10:30:11.607317 | orchestrator | 2025-10-09 10:30:11 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:11.607846 | orchestrator | 2025-10-09 10:30:11 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:11.607880 | orchestrator | 2025-10-09 10:30:11 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:14.658493 | orchestrator | 2025-10-09 10:30:14 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:14.658765 | orchestrator | 2025-10-09 10:30:14 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:14.659009 | orchestrator | 2025-10-09 10:30:14 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:17.713750 | orchestrator | 2025-10-09 10:30:17 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:17.721097 | orchestrator | 2025-10-09 10:30:17 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:17.721175 | orchestrator | 2025-10-09 10:30:17 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:20.768026 | orchestrator | 2025-10-09 10:30:20 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:20.768202 | orchestrator | 2025-10-09 10:30:20 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:20.768483 | orchestrator | 2025-10-09 10:30:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:23.805682 | orchestrator | 2025-10-09 10:30:23 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:23.807481 | orchestrator | 2025-10-09 10:30:23 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:23.807512 | orchestrator | 2025-10-09 10:30:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:26.855720 | orchestrator | 2025-10-09 
10:30:26 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:26.857947 | orchestrator | 2025-10-09 10:30:26 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:26.857976 | orchestrator | 2025-10-09 10:30:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:29.897265 | orchestrator | 2025-10-09 10:30:29 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:29.898965 | orchestrator | 2025-10-09 10:30:29 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:29.899152 | orchestrator | 2025-10-09 10:30:29 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:32.943778 | orchestrator | 2025-10-09 10:30:32 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:32.944888 | orchestrator | 2025-10-09 10:30:32 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:32.944916 | orchestrator | 2025-10-09 10:30:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:35.992163 | orchestrator | 2025-10-09 10:30:35 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:35.992318 | orchestrator | 2025-10-09 10:30:35 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:35.992335 | orchestrator | 2025-10-09 10:30:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:39.030365 | orchestrator | 2025-10-09 10:30:39 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED 2025-10-09 10:30:39.031205 | orchestrator | 2025-10-09 10:30:39 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:39.031255 | orchestrator | 2025-10-09 10:30:39 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:42.086514 | orchestrator | 2025-10-09 10:30:42 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state 
STARTED 2025-10-09 10:30:42.087005 | orchestrator | 2025-10-09 10:30:42 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:30:42.087035 | orchestrator | 2025-10-09 10:30:42 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:30:45.142212 | orchestrator | 2025-10-09 10:30:45 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:30:45.142983 | orchestrator | 2025-10-09 10:30:45 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:30:45.143015 | orchestrator | 2025-10-09 10:30:45 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:30:48.190383 | orchestrator | 2025-10-09 10:30:48 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state STARTED
2025-10-09 10:30:48.190473 | orchestrator | 2025-10-09 10:30:48 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED
2025-10-09 10:30:48.190486 | orchestrator | 2025-10-09 10:30:48 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:30:51.243550 | orchestrator |
2025-10-09 10:30:51.243804 | orchestrator |
2025-10-09 10:30:51.243824 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:30:51.243837 | orchestrator |
2025-10-09 10:30:51.243849 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:30:51.243861 | orchestrator | Thursday 09 October 2025 10:23:49 +0000 (0:00:00.321) 0:00:00.321 ******
2025-10-09 10:30:51.243872 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:30:51.243962 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:30:51.243975 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:30:51.243986 | orchestrator |
2025-10-09 10:30:51.243997 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:30:51.244008 | orchestrator | Thursday 09 October 2025 10:23:49 +0000 (0:00:00.313) 0:00:00.635 ******
2025-10-09 10:30:51.244019 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-10-09 10:30:51.244030 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-10-09 10:30:51.244041 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-10-09 10:30:51.244052 | orchestrator |
2025-10-09 10:30:51.244063 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-10-09 10:30:51.244076 | orchestrator |
2025-10-09 10:30:51.244088 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-10-09 10:30:51.244100 | orchestrator | Thursday 09 October 2025 10:23:50 +0000 (0:00:00.827) 0:00:01.463 ******
2025-10-09 10:30:51.244113 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.244126 | orchestrator |
2025-10-09 10:30:51.244138 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-10-09 10:30:51.244151 | orchestrator | Thursday 09 October 2025 10:23:51 +0000 (0:00:01.176) 0:00:02.639 ******
2025-10-09 10:30:51.244163 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:30:51.244176 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:30:51.244235 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:30:51.244250 | orchestrator |
2025-10-09 10:30:51.244261 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-10-09 10:30:51.244292 | orchestrator | Thursday 09 October 2025 10:23:52 +0000 (0:00:00.875) 0:00:03.514 ******
2025-10-09 10:30:51.244340 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.244351 | orchestrator |
2025-10-09 10:30:51.244363 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-10-09 10:30:51.244374 | orchestrator | Thursday 09 October 2025 10:23:54 +0000 (0:00:01.427) 0:00:04.942 ******
2025-10-09 10:30:51.244416 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:30:51.244428 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:30:51.244439 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:30:51.244548 | orchestrator |
2025-10-09 10:30:51.244560 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-10-09 10:30:51.244571 | orchestrator | Thursday 09 October 2025 10:23:55 +0000 (0:00:00.992) 0:00:05.935 ******
2025-10-09 10:30:51.244620 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:30:51.244634 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:30:51.244645 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:30:51.244656 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:30:51.244694 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:30:51.244706 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-10-09 10:30:51.244718 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-10-09 10:30:51.244730 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-10-09 10:30:51.244741 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-10-09 10:30:51.244752 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-10-09 10:30:51.244762 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-10-09 10:30:51.244773 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-10-09 10:30:51.244784 | orchestrator |
2025-10-09 10:30:51.244794 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-10-09 10:30:51.244806 | orchestrator | Thursday 09 October 2025 10:23:58 +0000 (0:00:03.412) 0:00:09.347 ******
2025-10-09 10:30:51.244817 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-10-09 10:30:51.244829 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-10-09 10:30:51.244839 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-10-09 10:30:51.244851 | orchestrator |
2025-10-09 10:30:51.244861 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-10-09 10:30:51.244872 | orchestrator | Thursday 09 October 2025 10:24:00 +0000 (0:00:01.662) 0:00:11.009 ******
2025-10-09 10:30:51.244883 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-10-09 10:30:51.244895 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-10-09 10:30:51.244905 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-10-09 10:30:51.244916 | orchestrator |
2025-10-09 10:30:51.244927 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-10-09 10:30:51.244938 | orchestrator | Thursday 09 October 2025 10:24:02 +0000 (0:00:01.934) 0:00:12.943 ******
2025-10-09 10:30:51.244989 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-10-09 10:30:51.245000 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.245033 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-10-09 10:30:51.245045 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.245056 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-10-09 10:30:51.245067 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.245078 | orchestrator |
2025-10-09 10:30:51.245150 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-10-09 10:30:51.245161 | orchestrator | Thursday 09 October 2025 10:24:03 +0000 (0:00:01.408) 0:00:14.352 ******
2025-10-09 10:30:51.245176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.245305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.245329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.245341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.245353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.245373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.245425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.245438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.245454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.245473 | orchestrator |
2025-10-09 10:30:51.245484 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-10-09 10:30:51.245496 | orchestrator | Thursday 09 October 2025 10:24:06 +0000 (0:00:03.066) 0:00:17.419 ******
2025-10-09 10:30:51.245507 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.245518 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.245529 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.245539 | orchestrator |
2025-10-09 10:30:51.245550 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-10-09 10:30:51.245561 | orchestrator | Thursday 09 October 2025 10:24:07 +0000 (0:00:01.171) 0:00:18.591 ******
2025-10-09 10:30:51.245572 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-10-09 10:30:51.245583 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-10-09 10:30:51.245594 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-10-09 10:30:51.245605 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-10-09 10:30:51.245615 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-10-09 10:30:51.245626 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-10-09 10:30:51.245637 | orchestrator |
2025-10-09 10:30:51.245648 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-10-09 10:30:51.245658 | orchestrator | Thursday 09 October 2025 10:24:10 +0000 (0:00:02.331) 0:00:20.923 ******
2025-10-09 10:30:51.245669 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.245680 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.245691 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.245701 | orchestrator |
2025-10-09 10:30:51.245712 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-10-09 10:30:51.245723 | orchestrator | Thursday 09 October 2025 10:24:13 +0000 (0:00:03.312) 0:00:24.235 ******
2025-10-09 10:30:51.245734 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:30:51.245745 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:30:51.245756 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:30:51.245766 | orchestrator |
2025-10-09 10:30:51.245861 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-10-09 10:30:51.245872 | orchestrator | Thursday 09 October 2025 10:24:16 +0000 (0:00:02.777) 0:00:27.012 ******
2025-10-09 10:30:51.245884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.245919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.245932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.245957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-10-09 10:30:51.245969 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.245987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.245999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.246011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.246099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-10-09 10:30:51.246111 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.246134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.246155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.246173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.246185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-10-09 10:30:51.246414 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.246427 | orchestrator |
2025-10-09 10:30:51.246438 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-10-09 10:30:51.246450 | orchestrator | Thursday 09 October 2025 10:24:18 +0000 (0:00:02.312) 0:00:29.325 ******
2025-10-09 10:30:51.246461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.246473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.246496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.246519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.246531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.246558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-10-09 10:30:51.246570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.246582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.246593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-10-09 10:30:51.246619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.246632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/',
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:30:51.246648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066', '__omit_place_holder__d53c56779780f15f3d42ee3c4f0fd8ddbb2a5066'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-10-09 10:30:51.246660 | orchestrator | 2025-10-09 10:30:51.246672 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-10-09 10:30:51.246683 | orchestrator | Thursday 09 October 2025 10:24:22 +0000 (0:00:03.986) 0:00:33.311 ****** 2025-10-09 10:30:51.246694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-09 10:30:51.246706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-09 10:30:51.246718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-09 10:30:51.246743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:30:51.246755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:30:51.246772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:30:51.246784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:30:51.246796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:30:51.246807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:30:51.246818 | orchestrator | 2025-10-09 10:30:51.246830 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-10-09 10:30:51.246847 | orchestrator | Thursday 09 October 2025 10:24:26 +0000 (0:00:03.567) 0:00:36.879 ****** 2025-10-09 10:30:51.246857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-10-09 10:30:51.246867 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-10-09 10:30:51.246915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-10-09 10:30:51.246926 | orchestrator | 2025-10-09 10:30:51.246935 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-10-09 10:30:51.246945 | orchestrator | Thursday 09 October 2025 10:24:29 +0000 (0:00:03.514) 0:00:40.394 ****** 2025-10-09 10:30:51.246955 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-10-09 10:30:51.246965 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-10-09 10:30:51.246975 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-10-09 10:30:51.247007 | orchestrator | 2025-10-09 10:30:51.249191 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-10-09 10:30:51.249322 | orchestrator | Thursday 09 October 2025 10:24:36 +0000 (0:00:07.224) 0:00:47.618 ****** 2025-10-09 10:30:51.249341 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.249354 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.249366 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.249377 | orchestrator | 2025-10-09 10:30:51.249389 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-10-09 10:30:51.249400 | orchestrator | Thursday 09 October 2025 10:24:37 +0000 (0:00:00.923) 0:00:48.541 ****** 2025-10-09 10:30:51.249413 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-10-09 10:30:51.249426 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-10-09 10:30:51.249437 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-10-09 10:30:51.249448 | orchestrator | 2025-10-09 10:30:51.249460 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-10-09 10:30:51.249471 | orchestrator | Thursday 09 October 2025 10:24:42 +0000 (0:00:04.995) 0:00:53.537 ****** 2025-10-09 10:30:51.249482 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-10-09 10:30:51.249494 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-10-09 10:30:51.249505 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-10-09 10:30:51.249516 | orchestrator | 2025-10-09 10:30:51.249527 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-10-09 10:30:51.249546 | orchestrator | Thursday 09 October 2025 10:24:47 +0000 (0:00:04.293) 0:00:57.830 ****** 2025-10-09 10:30:51.249559 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-10-09 10:30:51.249571 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-10-09 10:30:51.249582 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-10-09 10:30:51.249593 | orchestrator | 2025-10-09 10:30:51.249604 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-10-09 10:30:51.249615 | orchestrator | Thursday 09 October 2025 10:24:49 +0000 (0:00:02.296) 0:01:00.127 ****** 2025-10-09 10:30:51.249626 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-10-09 10:30:51.249638 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-10-09 10:30:51.249649 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-10-09 10:30:51.249679 | orchestrator | 2025-10-09 10:30:51.249691 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-10-09 10:30:51.249702 | orchestrator | Thursday 09 October 2025 10:24:51 +0000 (0:00:01.841) 0:01:01.969 ****** 2025-10-09 10:30:51.249713 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.249724 | orchestrator | 2025-10-09 10:30:51.249735 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-10-09 
10:30:51.249746 | orchestrator | Thursday 09 October 2025 10:24:52 +0000 (0:00:01.274) 0:01:03.243 ****** 2025-10-09 10:30:51.249761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-10-09 10:30:51.249777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-10-09 10:30:51.249808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-10-09 10:30:51.249821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:30:51.249840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:30:51.249851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-10-09 10:30:51.249871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:30:51.249884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:30:51.249896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-10-09 10:30:51.249907 | orchestrator | 2025-10-09 10:30:51.249918 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-10-09 10:30:51.249929 | orchestrator | Thursday 09 October 2025 10:24:56 +0000 
(0:00:03.982) 0:01:07.226 ****** 2025-10-09 10:30:51.249949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:30:51.249961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:30:51.249982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:30:51.250000 | orchestrator | skipping: [testbed-node-0] 2025-10-09 
10:30:51.250012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-10-09 10:30:51.250112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:30:51.250125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:30:51.250136 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.250148 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-10-09 10:30:51.250168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:30:51.250181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-10-09 10:30:51.250199 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.250211 | orchestrator | 2025-10-09 10:30:51.250246 | orchestrator | TASK 
[service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-10-09 10:30:51.250257 | orchestrator | Thursday 09 October 2025 10:24:59 +0000 (0:00:02.974) 0:01:10.201 ****** 2025-10-09 10:30:51.250275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-10-09 10:30:51.250287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-10-09 10:30:51.250299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250311 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.250322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250373 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.250389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250425 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.250436 | orchestrator |
2025-10-09 10:30:51.250447 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-10-09 10:30:51.250458 | orchestrator | Thursday 09 October 2025 10:25:00 +0000 (0:00:01.048) 0:01:11.249 ******
2025-10-09 10:30:51.250470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes':
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250535 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.250547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250587 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.250598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250640 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.250651 | orchestrator |
2025-10-09 10:30:51.250662 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-10-09 10:30:51.250679 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:01.189) 0:01:12.439 ******
2025-10-09 10:30:51.250691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250719 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250731 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.250742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250777 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.250794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250840 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.250852 | orchestrator |
2025-10-09 10:30:51.250863 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-10-09 10:30:51.250874 | orchestrator | Thursday 09 October 2025 10:25:02 +0000 (0:00:00.889) 0:01:13.329 ******
2025-10-09 10:30:51.250886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250921 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.250938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.250955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.250967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.250979 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.250995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251030 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.251041 | orchestrator |
2025-10-09 10:30:51.251052 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-10-09 10:30:51.251063 | orchestrator | Thursday 09 October 2025 10:25:03 +0000 (0:00:01.261) 0:01:14.591 ******
2025-10-09 10:30:51.251075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251123 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.251144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251179 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.251191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251270 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.251281 | orchestrator |
2025-10-09 10:30:51.251292 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-10-09 10:30:51.251303 | orchestrator | Thursday 09 October 2025 10:25:06 +0000 (0:00:02.791) 0:01:17.382 ******
2025-10-09 10:30:51.251314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True,
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251355 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.251366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251422 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.251433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251468 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.251479 | orchestrator |
2025-10-09 10:30:51.251490 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-10-09 10:30:51.251501 | orchestrator | Thursday 09 October 2025 10:25:08 +0000 (0:00:01.561) 0:01:18.944 ******
2025-10-09 10:30:51.251513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251610 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.251626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251638 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.251650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.251667 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.251679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.251691 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.251702 | orchestrator |
2025-10-09 10:30:51.251713 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-10-09 10:30:51.251724 | orchestrator | Thursday 09 October 2025 10:25:10 +0000 (0:00:01.814) 0:01:20.758 ******
2025-10-09 10:30:51.251735 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-10-09 10:30:51.251747 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-10-09 10:30:51.251763 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-10-09 10:30:51.251775 | orchestrator |
2025-10-09 10:30:51.251786 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-10-09 10:30:51.251797 | orchestrator | Thursday 09 October 2025 10:25:12 +0000 (0:00:02.524) 0:01:23.282 ******
2025-10-09 10:30:51.251808 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-10-09 10:30:51.251819 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-10-09 10:30:51.251830 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-10-09 10:30:51.251841 | orchestrator |
2025-10-09 10:30:51.251852 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-10-09 10:30:51.251863 | orchestrator | Thursday 09 October 2025 10:25:14 +0000 (0:00:01.864) 0:01:25.147 ******
2025-10-09 10:30:51.251873 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-10-09 10:30:51.251884 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-10-09 10:30:51.251895 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-10-09 10:30:51.251906 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-10-09 10:30:51.251917 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.251928 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-10-09 10:30:51.251939 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-10-09 10:30:51.251950 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.251961 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.251977 | orchestrator |
2025-10-09 10:30:51.251993 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-10-09 10:30:51.252004 | orchestrator | Thursday 09 October 2025 10:25:15 +0000 (0:00:01.182) 0:01:26.329 ******
2025-10-09 10:30:51.252016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.252028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.252039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-10-09 10:30:51.252066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.252078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.252089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-10-09 10:30:51.252112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.252124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.252136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-10-09 10:30:51.252147 | orchestrator |
2025-10-09 10:30:51.252158 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-10-09 10:30:51.252169 | orchestrator | Thursday 09 October 2025 10:25:19 +0000 (0:00:03.933) 0:01:30.263 ******
2025-10-09 10:30:51.252180 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.252191 | orchestrator |
2025-10-09 10:30:51.252202 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-10-09 10:30:51.252213 | orchestrator | Thursday 09 October 2025 10:25:20 +0000 (0:00:00.875) 0:01:31.140 ******
2025-10-09 10:30:51.252244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-10-09 10:30:51.252265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.252277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-10-09 10:30:51.252323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.252335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-10-09 10:30:51.252404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.252416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252439 | orchestrator |
2025-10-09 10:30:51.252450 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-10-09 10:30:51.252461 | orchestrator | Thursday 09 October 2025 10:25:26 +0000 (0:00:05.768) 0:01:36.909 ******
2025-10-09 10:30:51.252473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-10-09 10:30:51.252491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.252503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252562 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.252580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-10-09 10:30:51.252591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.252603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252614 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252625 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.252645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-10-09 10:30:51.252664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.252681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.252704 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.252715 | orchestrator |
2025-10-09 10:30:51.252726 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-10-09 10:30:51.252737 | orchestrator | Thursday 09 October 2025 10:25:27 +0000 (0:00:01.374) 0:01:38.284 ******
2025-10-09 10:30:51.252749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-10-09 10:30:51.252761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-10-09 10:30:51.252772 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.252784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-10-09 10:30:51.252795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-10-09 10:30:51.252806 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.252817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-10-09 10:30:51.252835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-10-09 10:30:51.252847 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.252858 | orchestrator |
2025-10-09 10:30:51.252875 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-10-09 10:30:51.252886 | orchestrator | Thursday 09 October 2025 10:25:29 +0000 (0:00:01.658) 0:01:39.942 ******
2025-10-09 10:30:51.252897 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.252908 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.252919 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.252930 | orchestrator |
2025-10-09 10:30:51.252941 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-10-09
10:30:51.252951 | orchestrator | Thursday 09 October 2025 10:25:30 +0000 (0:00:01.317) 0:01:41.260 ******
2025-10-09 10:30:51.252962 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.252973 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.252984 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.252995 | orchestrator |
2025-10-09 10:30:51.253006 | orchestrator | TASK [include_role : barbican] *************************************************
2025-10-09 10:30:51.253016 | orchestrator | Thursday 09 October 2025 10:25:33 +0000 (0:00:02.804) 0:01:44.065 ******
2025-10-09 10:30:51.253027 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.253038 | orchestrator |
2025-10-09 10:30:51.253049 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-10-09 10:30:51.253060 | orchestrator | Thursday 09 October 2025 10:25:34 +0000 (0:00:00.911) 0:01:44.976 ******
2025-10-09 10:30:51.253077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.253089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.253101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.253113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.253138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.253155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.253167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value':
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.253178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.253190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.253211 | orchestrator | 2025-10-09 10:30:51.253291 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-10-09 
10:30:51.253303 | orchestrator | Thursday 09 October 2025 10:25:37 +0000 (0:00:03.607) 0:01:48.584 ****** 2025-10-09 10:30:51.253323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.253335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.253352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.253364 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.253375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.253387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.253406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.253417 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.253436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.253448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.253465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.253476 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.253488 | orchestrator | 2025-10-09 10:30:51.253499 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-10-09 10:30:51.253510 | orchestrator | Thursday 09 October 2025 10:25:38 +0000 (0:00:00.684) 0:01:49.268 ****** 2025-10-09 10:30:51.253521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:30:51.253534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 
10:30:51.253553 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.253564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:30:51.253575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:30:51.253587 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.253596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:30:51.253606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-10-09 10:30:51.253616 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.253626 | orchestrator | 2025-10-09 10:30:51.253636 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-10-09 10:30:51.253646 | orchestrator | Thursday 09 October 2025 10:25:39 +0000 (0:00:01.296) 0:01:50.564 ****** 2025-10-09 10:30:51.253656 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.253665 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.253675 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.253684 | orchestrator | 2025-10-09 10:30:51.253694 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-10-09 10:30:51.253704 | orchestrator | Thursday 09 October 2025 10:25:41 +0000 (0:00:01.378) 0:01:51.942 
****** 2025-10-09 10:30:51.253713 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.253723 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.253732 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.253742 | orchestrator | 2025-10-09 10:30:51.253757 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-10-09 10:30:51.253767 | orchestrator | Thursday 09 October 2025 10:25:43 +0000 (0:00:02.112) 0:01:54.054 ****** 2025-10-09 10:30:51.253777 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.253786 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.253796 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.253805 | orchestrator | 2025-10-09 10:30:51.253815 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-10-09 10:30:51.253825 | orchestrator | Thursday 09 October 2025 10:25:43 +0000 (0:00:00.329) 0:01:54.383 ****** 2025-10-09 10:30:51.253834 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.253844 | orchestrator | 2025-10-09 10:30:51.253854 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-10-09 10:30:51.253864 | orchestrator | Thursday 09 October 2025 10:25:44 +0000 (0:00:00.953) 0:01:55.337 ****** 2025-10-09 10:30:51.253878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-10-09 10:30:51.253899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-10-09 10:30:51.253910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 
5']}}}}) 2025-10-09 10:30:51.253920 | orchestrator | 2025-10-09 10:30:51.253930 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-10-09 10:30:51.253940 | orchestrator | Thursday 09 October 2025 10:25:47 +0000 (0:00:03.053) 0:01:58.390 ****** 2025-10-09 10:30:51.253956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-10-09 10:30:51.253967 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.253977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-10-09 10:30:51.253987 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.254002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-10-09 10:30:51.254062 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.254075 | orchestrator | 2025-10-09 10:30:51.254085 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-10-09 10:30:51.254094 | orchestrator | Thursday 09 October 2025 10:25:49 +0000 (0:00:01.568) 0:01:59.958 ****** 2025-10-09 10:30:51.254106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:30:51.254117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:30:51.254128 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.254138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:30:51.254149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:30:51.254159 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.254175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:30:51.254186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-10-09 10:30:51.254196 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.254206 | orchestrator | 2025-10-09 10:30:51.254215 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-10-09 10:30:51.254249 | orchestrator | Thursday 09 October 2025 10:25:51 +0000 (0:00:02.311) 0:02:02.270 ****** 2025-10-09 10:30:51.254259 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.254268 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.254278 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.254288 | orchestrator | 2025-10-09 10:30:51.254298 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-10-09 10:30:51.254308 | orchestrator | Thursday 09 October 2025 10:25:52 +0000 (0:00:01.210) 0:02:03.480 ****** 2025-10-09 10:30:51.254317 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.254327 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.254337 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.254346 | orchestrator | 2025-10-09 10:30:51.254356 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-10-09 10:30:51.254366 | orchestrator | Thursday 09 October 2025 10:25:54 +0000 (0:00:01.604) 0:02:05.085 ****** 2025-10-09 10:30:51.254375 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.254385 | orchestrator | 2025-10-09 10:30:51.254395 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-10-09 10:30:51.254409 | orchestrator | Thursday 
09 October 2025 10:25:55 +0000 (0:00:00.900) 0:02:05.986 ****** 2025-10-09 10:30:51.254420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.254431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.254459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.254482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51 | INFO  | Task 8e426ec1-3c0f-4989-ad16-24d02e834e6d is in state SUCCESS 2025-10-09 10:30:51.254548 | orchestrator | 2025-10-09 10:30:51.254559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254594 | orchestrator | 2025-10-09 10:30:51.254603 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-10-09 10:30:51.254613 | orchestrator | Thursday 09 October 2025 10:25:58 +0000 (0:00:03.602) 0:02:09.588 
****** 2025-10-09 10:30:51.254623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.254640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254684 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.254694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.254704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254749 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.254764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.254774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.254811 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.254821 | orchestrator | 2025-10-09 10:30:51.254831 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-10-09 10:30:51.254841 | orchestrator | Thursday 09 
October 2025 10:25:59 +0000 (0:00:01.027) 0:02:10.615 ****** 2025-10-09 10:30:51.254857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:30:51.254868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:30:51.254878 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.254888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:30:51.254898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:30:51.254908 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.254918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:30:51.254928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-10-09 10:30:51.254937 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.254947 | orchestrator | 2025-10-09 10:30:51.254957 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-10-09 10:30:51.254967 | 
orchestrator | Thursday 09 October 2025 10:26:00 +0000 (0:00:00.982) 0:02:11.598 ****** 2025-10-09 10:30:51.254976 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.254986 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.255000 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.255010 | orchestrator | 2025-10-09 10:30:51.255020 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-10-09 10:30:51.255030 | orchestrator | Thursday 09 October 2025 10:26:02 +0000 (0:00:01.319) 0:02:12.918 ****** 2025-10-09 10:30:51.255040 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.255049 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.255059 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.255068 | orchestrator | 2025-10-09 10:30:51.255078 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-10-09 10:30:51.255088 | orchestrator | Thursday 09 October 2025 10:26:04 +0000 (0:00:02.087) 0:02:15.006 ****** 2025-10-09 10:30:51.255098 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.255107 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.255117 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.255127 | orchestrator | 2025-10-09 10:30:51.255136 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-10-09 10:30:51.255146 | orchestrator | Thursday 09 October 2025 10:26:04 +0000 (0:00:00.571) 0:02:15.577 ****** 2025-10-09 10:30:51.255156 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.255165 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.255175 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.255203 | orchestrator | 2025-10-09 10:30:51.255213 | orchestrator | TASK [include_role : designate] ************************************************ 2025-10-09 10:30:51.255269 | 
orchestrator | Thursday 09 October 2025 10:26:05 +0000 (0:00:00.329) 0:02:15.906 ****** 2025-10-09 10:30:51.255280 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.255289 | orchestrator | 2025-10-09 10:30:51.255299 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-10-09 10:30:51.255309 | orchestrator | Thursday 09 October 2025 10:26:06 +0000 (0:00:00.875) 0:02:16.781 ****** 2025-10-09 10:30:51.255319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:30:51.255336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:30:51.255348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.255358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.255373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.255383 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.255400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.255410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:30:51.255426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:30:51.255437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.255462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2025-10-09 10:30:51.255473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.255489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.255497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.255511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:30:51.255520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:30:51.255533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255580 | orchestrator |
2025-10-09 10:30:51.255588 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-10-09 10:30:51.255596 | orchestrator | Thursday 09 October 2025 10:26:10 +0000 (0:00:04.500) 0:02:21.282 ******
2025-10-09 10:30:51.255610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:30:51.255619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:30:51.255636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255684 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.255692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:30:51.255708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:30:51.255717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:30:51.255771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:30:51.255788 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.255796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.255848 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.255856 | orchestrator |
2025-10-09 10:30:51.255864 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-10-09 10:30:51.255872 | orchestrator | Thursday 09 October 2025 10:26:11 +0000 (0:00:00.921) 0:02:22.203 ******
2025-10-09 10:30:51.255884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-10-09 10:30:51.255892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-10-09 10:30:51.255900 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.255908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-10-09 10:30:51.255916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-10-09 10:30:51.255924 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.255932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-10-09 10:30:51.255940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-10-09 10:30:51.255948 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.255956 | orchestrator |
2025-10-09 10:30:51.255964 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-10-09 10:30:51.255972 | orchestrator | Thursday 09 October 2025 10:26:12 +0000 (0:00:01.100) 0:02:23.304 ******
2025-10-09 10:30:51.255980 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.255987 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.255995 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.256003 | orchestrator |
2025-10-09 10:30:51.256011 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-10-09 10:30:51.256019 | orchestrator | Thursday 09 October 2025 10:26:14 +0000 (0:00:01.971) 0:02:25.276 ******
2025-10-09 10:30:51.256027 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.256034 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.256042 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.256050 | orchestrator |
2025-10-09 10:30:51.256058 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-10-09 10:30:51.256066 | orchestrator | Thursday 09 October 2025 10:26:16 +0000 (0:00:01.918) 0:02:27.194 ******
2025-10-09 10:30:51.256073 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.256081 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.256089 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.256097 | orchestrator |
2025-10-09 10:30:51.256105 | orchestrator | TASK [include_role : glance] ***************************************************
2025-10-09 10:30:51.256113 | orchestrator | Thursday 09 October 2025 10:26:17 +0000 (0:00:00.604) 0:02:27.799 ******
2025-10-09 10:30:51.256120 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.256128 | orchestrator |
2025-10-09 10:30:51.256136 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-10-09 10:30:51.256152 | orchestrator | Thursday 09 October 2025 10:26:18 +0000 (0:00:00.939) 0:02:28.738 ******
2025-10-09 10:30:51.256173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-10-09 10:30:51.256184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-10-09 10:30:51.256201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-10-09 10:30:51.256232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-10-09 10:30:51.256248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-10-09 10:30:51.256265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-10-09 10:30:51.256275 | orchestrator |
2025-10-09 10:30:51.256283 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-10-09 10:30:51.256291 | orchestrator | Thursday 09 October 2025 10:26:23 +0000 (0:00:05.038) 0:02:33.776 ******
2025-10-09 10:30:51.256305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-10-09 10:30:51.256323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-10-09 10:30:51.256333 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.256342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-10-09 10:30:51.256368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-10-09 10:30:51.256377 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.256385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-10-09 10:30:51.256406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-10-09 10:30:51.256416 | orchestrator | skipping: [testbed-node-1] 
2025-10-09 10:30:51.256424 | orchestrator |
2025-10-09 10:30:51.256432 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2025-10-09 10:30:51.256440 | orchestrator | Thursday 09 October 2025 10:26:26 +0000 (0:00:03.568) 0:02:37.345 ******
2025-10-09 10:30:51.256451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-10-09 10:30:51.256460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-10-09 10:30:51.256469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-10-09 10:30:51.256477 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.256486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-10-09 10:30:51.256499 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.256507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-10-09 10:30:51.256523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-10-09 10:30:51.256531 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.256540 | orchestrator |
2025-10-09 10:30:51.256548 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-10-09 10:30:51.256556 | orchestrator | Thursday 09 October 2025 10:26:30 +0000 (0:00:03.325) 0:02:40.671 ******
2025-10-09 10:30:51.256564 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.256572 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.256580 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.256588 | orchestrator |
2025-10-09 10:30:51.256596 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-10-09 10:30:51.256604 | orchestrator | Thursday 09 October 2025 10:26:31 +0000 (0:00:01.210) 0:02:41.881 ******
2025-10-09 10:30:51.256612 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.256620 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.256628 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.256636 | orchestrator |
2025-10-09 10:30:51.256644 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-10-09 10:30:51.256652 | orchestrator | Thursday 09 October 2025 10:26:33 +0000 (0:00:02.128) 0:02:44.009 ******
2025-10-09 10:30:51.256660 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.256668 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.256676 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.256684 | orchestrator |
2025-10-09 10:30:51.256692 | orchestrator | TASK [include_role : grafana] **************************************************
2025-10-09 10:30:51.256700 | orchestrator | Thursday 09 October 2025 10:26:33 +0000 (0:00:00.585) 0:02:44.595 ******
2025-10-09 10:30:51.256712 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.256720 | orchestrator |
2025-10-09 10:30:51.256728 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-10-09 10:30:51.256736 | orchestrator | Thursday 09 October 2025 10:26:34 +0000 (0:00:00.929) 0:02:45.524 ******
2025-10-09 10:30:51.256745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:30:51.256760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:30:51.256769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:30:51.256777 | orchestrator | 2025-10-09 10:30:51.256786 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-10-09 10:30:51.256794 | orchestrator | Thursday 09 October 2025 10:26:38 +0000 (0:00:03.267) 0:02:48.792 ****** 2025-10-09 10:30:51.256817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:30:51.256826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:30:51.256834 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.256842 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.256859 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:30:51.256874 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.256882 | orchestrator | 2025-10-09 10:30:51.256890 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-10-09 10:30:51.256898 | orchestrator | Thursday 09 October 2025 10:26:38 +0000 (0:00:00.703) 0:02:49.496 ****** 2025-10-09 10:30:51.256906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-10-09 10:30:51.256915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-10-09 10:30:51.256923 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.256931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-10-09 10:30:51.256939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}})
2025-10-09 10:30:51.256947 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.256955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-10-09 10:30:51.256963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-10-09 10:30:51.256971 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.256979 | orchestrator |
2025-10-09 10:30:51.256987 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-10-09 10:30:51.256995 | orchestrator | Thursday 09 October 2025 10:26:39 +0000 (0:00:00.813) 0:02:50.309 ******
2025-10-09 10:30:51.257003 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.257011 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.257019 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.257027 | orchestrator |
2025-10-09 10:30:51.257035 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-10-09 10:30:51.257043 | orchestrator | Thursday 09 October 2025 10:26:40 +0000 (0:00:01.235) 0:02:51.545 ******
2025-10-09 10:30:51.257051 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.257059 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.257066 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.257074 | orchestrator |
2025-10-09 10:30:51.257082 | orchestrator | TASK [include_role : heat] *****************************************************
2025-10-09 10:30:51.257098 | orchestrator | Thursday 09 October 2025 10:26:43 +0000 (0:00:02.241) 0:02:53.786 ******
2025-10-09 10:30:51.257111 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.257121 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.257129 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.257137 | orchestrator |
2025-10-09 10:30:51.257145 | orchestrator | TASK [include_role : horizon] **************************************************
2025-10-09 10:30:51.257153 | orchestrator | Thursday 09 October 2025 10:26:43 +0000 (0:00:00.599) 0:02:54.386 ******
2025-10-09 10:30:51.257161 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.257169 | orchestrator |
2025-10-09 10:30:51.257177 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-10-09 10:30:51.257185 | orchestrator | Thursday 09 October 2025 10:26:44 +0000 (0:00:00.901) 0:02:55.287 ******
2025-10-09 10:30:51.257198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if {
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:30:51.257233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:30:51.257253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:30:51.257263 | orchestrator | 2025-10-09 10:30:51.257271 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-10-09 10:30:51.257279 | orchestrator | Thursday 09 October 2025 10:26:49 +0000 (0:00:04.458) 0:02:59.746 ****** 2025-10-09 10:30:51.257295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:30:51.257309 | orchestrator | skipping: [testbed-node-0] 
2025-10-09 10:30:51.257322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:30:51.257331 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.257347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:30:51.257365 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.257373 | orchestrator | 2025-10-09 10:30:51.257381 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-10-09 10:30:51.257389 | orchestrator | Thursday 09 October 2025 10:26:50 +0000 (0:00:01.534) 0:03:01.281 ****** 2025-10-09 10:30:51.257397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:30:51.257407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:30:51.257416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:30:51.257424 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:30:51.257434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-10-09 10:30:51.257442 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.257450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:30:51.257458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:30:51.257472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:30:51.257485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:30:51.257494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:30:51.257502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:30:51.257510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-10-09 10:30:51.257522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-10-09 10:30:51.257531 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.257539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-10-09 10:30:51.257547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2025-10-09 10:30:51.257555 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.257563 | orchestrator | 2025-10-09 10:30:51.257571 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-10-09 10:30:51.257579 | orchestrator | Thursday 09 October 2025 10:26:52 +0000 (0:00:01.435) 0:03:02.716 ****** 2025-10-09 10:30:51.257587 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.257595 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.257603 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.257611 | orchestrator | 2025-10-09 10:30:51.257619 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-10-09 10:30:51.257627 | orchestrator | Thursday 09 October 2025 10:26:53 +0000 (0:00:01.505) 0:03:04.222 ****** 2025-10-09 10:30:51.257635 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.257643 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.257651 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.257659 | orchestrator | 2025-10-09 10:30:51.257667 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-10-09 10:30:51.257675 | orchestrator | Thursday 09 October 2025 10:26:55 +0000 (0:00:02.372) 0:03:06.594 ****** 2025-10-09 10:30:51.257682 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.257690 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.257698 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.257706 | orchestrator | 2025-10-09 10:30:51.257714 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-10-09 10:30:51.257722 | orchestrator | Thursday 09 October 2025 10:26:56 +0000 (0:00:00.354) 0:03:06.948 ****** 2025-10-09 10:30:51.257730 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.257738 | orchestrator | skipping: [testbed-node-1] 
2025-10-09 10:30:51.257751 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.257759 | orchestrator | 2025-10-09 10:30:51.257767 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-10-09 10:30:51.257775 | orchestrator | Thursday 09 October 2025 10:26:56 +0000 (0:00:00.650) 0:03:07.599 ****** 2025-10-09 10:30:51.257783 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.257791 | orchestrator | 2025-10-09 10:30:51.257799 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-10-09 10:30:51.257807 | orchestrator | Thursday 09 October 2025 10:26:57 +0000 (0:00:01.004) 0:03:08.603 ****** 2025-10-09 10:30:51.257822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:30:51.257832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:30:51.257844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:30:51.257853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:30:51.257863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:30:51.257883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:30:51.257892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:30:51.257906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:30:51.257915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:30:51.257923 | orchestrator | 2025-10-09 10:30:51.257931 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-10-09 10:30:51.257939 | orchestrator | 
Thursday 09 October 2025 10:27:02 +0000 (0:00:04.484) 0:03:13.088 ****** 2025-10-09 10:30:51.257947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:30:51.257961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:30:51.257975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:30:51.257984 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.257996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-10-09 10:30:51.258005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:30:51.258013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:30:51.258049 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.258060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2025-10-09 10:30:51.258075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:30:51.258084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-10-09 10:30:51.258092 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.258100 | orchestrator | 2025-10-09 10:30:51.258108 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-10-09 10:30:51.258116 | orchestrator | Thursday 09 October 2025 10:27:03 +0000 (0:00:00.993) 0:03:14.081 ****** 2025-10-09 10:30:51.258124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:30:51.258133 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:30:51.258141 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.258150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:30:51.258158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:30:51.258170 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.258179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:30:51.258187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-10-09 10:30:51.258195 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.258203 | orchestrator | 2025-10-09 10:30:51.258211 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-10-09 10:30:51.258258 | orchestrator | Thursday 09 October 2025 10:27:04 +0000 (0:00:00.961) 0:03:15.042 ****** 2025-10-09 10:30:51.258267 | orchestrator | 
changed: [testbed-node-0] 2025-10-09 10:30:51.258275 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.258283 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.258290 | orchestrator | 2025-10-09 10:30:51.258297 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-10-09 10:30:51.258303 | orchestrator | Thursday 09 October 2025 10:27:05 +0000 (0:00:01.374) 0:03:16.416 ****** 2025-10-09 10:30:51.258310 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.258317 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.258324 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.258331 | orchestrator | 2025-10-09 10:30:51.258337 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-10-09 10:30:51.258344 | orchestrator | Thursday 09 October 2025 10:27:07 +0000 (0:00:02.170) 0:03:18.587 ****** 2025-10-09 10:30:51.258351 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.258358 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.258364 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.258371 | orchestrator | 2025-10-09 10:30:51.258378 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-10-09 10:30:51.258384 | orchestrator | Thursday 09 October 2025 10:27:08 +0000 (0:00:00.621) 0:03:19.208 ****** 2025-10-09 10:30:51.258391 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.258398 | orchestrator | 2025-10-09 10:30:51.258404 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-10-09 10:30:51.258421 | orchestrator | Thursday 09 October 2025 10:27:09 +0000 (0:00:01.015) 0:03:20.224 ****** 2025-10-09 10:30:51.258440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:30:51.258451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.258463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:30:51.258471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.258484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-09 10:30:51.258491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258498 | orchestrator |
2025-10-09 10:30:51.258505 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-10-09 10:30:51.258512 | orchestrator | Thursday 09 October 2025 10:27:13 +0000 (0:00:03.733) 0:03:23.958 ******
2025-10-09 10:30:51.258527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-09 10:30:51.258534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258541 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.258549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-09 10:30:51.258560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258568 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.258575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-10-09 10:30:51.258589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258596 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.258603 | orchestrator |
2025-10-09 10:30:51.258610 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-10-09 10:30:51.258617 | orchestrator | Thursday 09 October 2025 10:27:14 +0000 (0:00:01.095) 0:03:25.054 ******
2025-10-09 10:30:51.258624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-10-09 10:30:51.258631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-10-09 10:30:51.258638 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.258645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-10-09 10:30:51.258652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-10-09 10:30:51.258659 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.258665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-10-09 10:30:51.258672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-10-09 10:30:51.258679 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.258686 | orchestrator |
2025-10-09 10:30:51.258692 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-10-09 10:30:51.258699 | orchestrator | Thursday 09 October 2025 10:27:15 +0000 (0:00:01.164) 0:03:26.219 ******
2025-10-09 10:30:51.258706 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.258713 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.258719 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.258726 | orchestrator |
2025-10-09 10:30:51.258732 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-10-09 10:30:51.258739 | orchestrator | Thursday 09 October 2025 10:27:16 +0000 (0:00:01.395) 0:03:27.614 ******
2025-10-09 10:30:51.258746 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.258753 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.258759 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.258766 | orchestrator |
2025-10-09 10:30:51.258773 | orchestrator | TASK [include_role : manila] ***************************************************
2025-10-09 10:30:51.258788 | orchestrator | Thursday 09 October 2025 10:27:19 +0000 (0:00:02.238) 0:03:29.853 ******
2025-10-09 10:30:51.258795 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.258802 | orchestrator |
2025-10-09 10:30:51.258808 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-10-09 10:30:51.258815 | orchestrator | Thursday 09 October 2025 10:27:20 +0000 (0:00:01.348) 0:03:31.202 ******
2025-10-09 10:30:51.258822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-10-09 10:30:51.258833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-10-09 10:30:51.258871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-10-09 10:30:51.258905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258935 | orchestrator |
2025-10-09 10:30:51.258941 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-10-09 10:30:51.258948 | orchestrator | Thursday 09 October 2025 10:27:24 +0000 (0:00:04.324) 0:03:35.526 ******
2025-10-09 10:30:51.258955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-10-09 10:30:51.258966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.258987 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.258994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-10-09 10:30:51.259016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.259023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.259034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.259041 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.259048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-10-09 10:30:51.259056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.259067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.259078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.259086 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.259093 | orchestrator |
2025-10-09 10:30:51.259100 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-10-09 10:30:51.259107 | orchestrator | Thursday 09 October 2025 10:27:25 +0000 (0:00:00.849) 0:03:36.375 ******
2025-10-09 10:30:51.259113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-10-09 10:30:51.259120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-10-09 10:30:51.259127 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.259134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-10-09 10:30:51.259141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-10-09 10:30:51.259148 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.259158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-10-09 10:30:51.259165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-10-09 10:30:51.259171 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.259178 | orchestrator |
2025-10-09 10:30:51.259185 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-10-09 10:30:51.259192 | orchestrator | Thursday 09 October 2025 10:27:27 +0000 (0:00:01.365) 0:03:37.741 ******
2025-10-09 10:30:51.259199 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.259205 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.259212 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.259232 | orchestrator |
2025-10-09 10:30:51.259239 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-10-09 10:30:51.259246 | orchestrator | Thursday 09 October 2025 10:27:28 +0000 (0:00:01.439) 0:03:39.180 ******
2025-10-09 10:30:51.259253 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.259259 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.259266 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.259277 | orchestrator |
2025-10-09 10:30:51.259284 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-10-09 10:30:51.259290 | orchestrator | Thursday 09 October 2025 10:27:30 +0000 (0:00:01.360) 0:03:41.405 ******
2025-10-09 10:30:51.259297 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.259304 | orchestrator |
2025-10-09 10:30:51.259310 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-10-09 10:30:51.259317 | orchestrator | Thursday 09 October 2025 10:27:32 +0000 (0:00:02.977) 0:03:42.766 ******
2025-10-09 10:30:51.259324 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:30:51.259331 | orchestrator |
2025-10-09 10:30:51.259338 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-10-09 10:30:51.259344 | orchestrator | Thursday 09 October 2025 10:27:35 +0000 (0:00:02.977) 0:03:45.744 ******
2025-10-09 10:30:51.259356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-09 10:30:51.259365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-10-09 10:30:51.259376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-09 10:30:51.259388 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.259395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-10-09 10:30:51.259402 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.259419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:30:51.259427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:30:51.259438 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.259445 | orchestrator | 2025-10-09 10:30:51.259452 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-10-09 10:30:51.259459 | orchestrator | Thursday 09 October 2025 10:27:37 +0000 (0:00:02.273) 0:03:48.018 ****** 2025-10-09 10:30:51.259471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:30:51.259479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:30:51.259486 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.259496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:30:51.259508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:30:51.259516 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.259528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:30:51.259535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-10-09 10:30:51.259551 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.259558 | orchestrator | 2025-10-09 10:30:51.259565 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-10-09 10:30:51.259572 | orchestrator | Thursday 09 October 2025 10:27:39 +0000 (0:00:02.471) 0:03:50.490 ****** 2025-10-09 10:30:51.259579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:30:51.259587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:30:51.259594 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.259601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:30:51.259608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:30:51.259619 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.259626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:30:51.259633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-10-09 10:30:51.259644 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.259650 | orchestrator | 2025-10-09 10:30:51.259657 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-10-09 10:30:51.259664 | orchestrator | Thursday 09 October 2025 10:27:42 +0000 (0:00:02.862) 0:03:53.353 ****** 
2025-10-09 10:30:51.259671 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.259677 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.259687 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.259694 | orchestrator | 2025-10-09 10:30:51.259700 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-10-09 10:30:51.259707 | orchestrator | Thursday 09 October 2025 10:27:44 +0000 (0:00:01.878) 0:03:55.231 ****** 2025-10-09 10:30:51.259714 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.259721 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.259727 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.259734 | orchestrator | 2025-10-09 10:30:51.259740 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-10-09 10:30:51.259747 | orchestrator | Thursday 09 October 2025 10:27:46 +0000 (0:00:01.538) 0:03:56.770 ****** 2025-10-09 10:30:51.259754 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.259760 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.259767 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.259773 | orchestrator | 2025-10-09 10:30:51.259780 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-10-09 10:30:51.259787 | orchestrator | Thursday 09 October 2025 10:27:46 +0000 (0:00:00.366) 0:03:57.136 ****** 2025-10-09 10:30:51.259793 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.259800 | orchestrator | 2025-10-09 10:30:51.259807 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-10-09 10:30:51.259813 | orchestrator | Thursday 09 October 2025 10:27:47 +0000 (0:00:01.407) 0:03:58.544 ****** 2025-10-09 10:30:51.259820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-10-09 10:30:51.259828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-10-09 10:30:51.259840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-10-09 10:30:51.259852 | orchestrator | 2025-10-09 10:30:51.259859 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-10-09 10:30:51.259866 | orchestrator | Thursday 09 October 2025 10:27:49 +0000 (0:00:01.550) 0:04:00.095 ****** 2025-10-09 10:30:51.259876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-10-09 10:30:51.259883 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.259890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-10-09 10:30:51.259897 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.259904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-10-09 10:30:51.259910 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.259917 | orchestrator | 2025-10-09 10:30:51.259924 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-10-09 10:30:51.259931 | orchestrator | Thursday 09 October 2025 10:27:49 +0000 (0:00:00.407) 0:04:00.503 ****** 2025-10-09 10:30:51.259937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-10-09 10:30:51.259944 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.259951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-10-09 10:30:51.259962 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.259973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-10-09 10:30:51.259980 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.259987 | orchestrator | 2025-10-09 10:30:51.259994 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-10-09 10:30:51.260000 | orchestrator | Thursday 09 October 2025 10:27:50 +0000 (0:00:00.919) 0:04:01.422 ****** 2025-10-09 10:30:51.260007 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.260014 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.260020 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.260027 | orchestrator | 2025-10-09 10:30:51.260034 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-10-09 10:30:51.260040 | orchestrator | Thursday 09 October 2025 10:27:51 +0000 (0:00:00.461) 0:04:01.884 ****** 2025-10-09 10:30:51.260047 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.260054 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.260060 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.260067 | orchestrator | 2025-10-09 10:30:51.260074 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-10-09 10:30:51.260080 | orchestrator | Thursday 09 October 2025 10:27:52 +0000 (0:00:01.357) 0:04:03.241 ****** 2025-10-09 10:30:51.260087 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.260094 | orchestrator | skipping: [testbed-node-1] 
2025-10-09 10:30:51.260100 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.260107 | orchestrator | 2025-10-09 10:30:51.260113 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-10-09 10:30:51.260120 | orchestrator | Thursday 09 October 2025 10:27:52 +0000 (0:00:00.325) 0:04:03.566 ****** 2025-10-09 10:30:51.260127 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.260133 | orchestrator | 2025-10-09 10:30:51.260140 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-10-09 10:30:51.260147 | orchestrator | Thursday 09 October 2025 10:27:54 +0000 (0:00:01.469) 0:04:05.036 ****** 2025-10-09 10:30:51.260157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:30:51.260164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-09 10:30:51.260372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2025-10-09 10:30:51.260388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:30:51.260420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:30:51.260431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-09 10:30:51.260499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-09 10:30:51.260513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:30:51.260524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}}) 
 2025-10-09 10:30:51.260549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:30:51.260571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:30:51.260581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260599 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 
'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-10-09 10:30:51.260653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-09 10:30:51.260660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': 
False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:30:51.260678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:30:51.260713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-10-09 10:30:51.260738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-10-09 10:30:51.260756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:30:51.260763 | orchestrator | 2025-10-09 10:30:51.260769 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-10-09 10:30:51.260776 | orchestrator | Thursday 09 October 2025 10:27:58 +0000 (0:00:04.395) 0:04:09.431 ****** 2025-10-09 10:30:51.260786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:30:51.260798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.260815 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.260822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-10-09 10:30:51.260832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.260844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-09 10:30:51.260851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-09 10:30:51.260858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.260865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:30:51.260876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.260883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-10-09 10:30:51.260897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-09 10:30:51.260904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.260911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-10-09 10:30:51.260921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-09 10:30:51.260928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:30:51.260936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.260948 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.260956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.260964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-10-09 10:30:51.261039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-09 10:30:51.261065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-09 10:30:51.261074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-09 10:30:51.261089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:30:51.261134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-10-09 10:30:51.261142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-10-09 10:30:51.261161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-09 10:30:51.261182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-09 10:30:51.261193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-09 10:30:51.261201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-10-09 10:30:51.261276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:30:51.261289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:30:51.261301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261309 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.261317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-10-09 10:30:51.261324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-10-09 10:30:51.261331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.261342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-10-09 10:30:51.261355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:30:51.261362 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.261369 | orchestrator |
2025-10-09 10:30:51.261376 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-10-09 10:30:51.261382 | orchestrator | Thursday 09 October 2025 10:28:00 +0000 (0:00:01.632) 0:04:11.064 ******
2025-10-09 10:30:51.261392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-10-09 10:30:51.261399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-10-09 10:30:51.261406 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.261412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-10-09 10:30:51.261418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-10-09 10:30:51.261424 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.261431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-10-09 10:30:51.261437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-10-09 10:30:51.261443 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.261449 | orchestrator |
2025-10-09 10:30:51.261456 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-10-09 10:30:51.261462 | orchestrator | Thursday 09 October 2025 10:28:02 +0000 (0:00:02.406) 0:04:13.471 ******
2025-10-09 10:30:51.261468 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.261474 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.261480 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.261486 | orchestrator |
2025-10-09 10:30:51.261493 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-10-09 10:30:51.261499 | orchestrator | Thursday 09 October 2025 10:28:04 +0000 (0:00:01.365) 0:04:14.837 ******
2025-10-09 10:30:51.261505 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.261511 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.261517 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.261523 | orchestrator |
2025-10-09 10:30:51.261529 | orchestrator | TASK [include_role : placement] ************************************************
2025-10-09 10:30:51.261536 | orchestrator | Thursday 09 October 2025 10:28:06 +0000 (0:00:02.237) 0:04:17.074 ******
2025-10-09 10:30:51.261546 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.261552 | orchestrator |
2025-10-09 10:30:51.261559 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-10-09 10:30:51.261565 | orchestrator | Thursday 09 October 2025 10:28:07 +0000 (0:00:01.239) 0:04:18.314 ******
2025-10-09 10:30:51.261669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.261680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.261690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.261697 | orchestrator |
2025-10-09 10:30:51.261703 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-10-09 10:30:51.261710 | orchestrator | Thursday 09 October 2025 10:28:11 +0000 (0:00:03.871) 0:04:22.186 ******
2025-10-09 10:30:51.261716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.261728 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.261737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.261744 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.261751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.261757 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.261763 | orchestrator |
2025-10-09 10:30:51.261769 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-10-09 10:30:51.261775 | orchestrator | Thursday 09 October 2025 10:28:12 +0000 (0:00:00.561) 0:04:22.747 ******
2025-10-09 10:30:51.261787 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:30:51.261794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:30:51.261802 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.261808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:30:51.261815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:30:51.261821 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.261827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:30:51.261834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-10-09 10:30:51.261844 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.261850 | orchestrator | 2025-10-09 10:30:51.261856 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-10-09 10:30:51.261862 | orchestrator | Thursday 09 October 2025 10:28:12 +0000 (0:00:00.806) 0:04:23.553 ****** 2025-10-09 
10:30:51.261869 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.261875 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.261881 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.261887 | orchestrator | 2025-10-09 10:30:51.261893 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-10-09 10:30:51.261899 | orchestrator | Thursday 09 October 2025 10:28:14 +0000 (0:00:01.443) 0:04:24.997 ****** 2025-10-09 10:30:51.261905 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.261911 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.261917 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.261923 | orchestrator | 2025-10-09 10:30:51.261930 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-10-09 10:30:51.261936 | orchestrator | Thursday 09 October 2025 10:28:16 +0000 (0:00:02.271) 0:04:27.268 ****** 2025-10-09 10:30:51.261942 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.261948 | orchestrator | 2025-10-09 10:30:51.261954 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-10-09 10:30:51.261960 | orchestrator | Thursday 09 October 2025 10:28:18 +0000 (0:00:01.634) 0:04:28.903 ****** 2025-10-09 10:30:51.261971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.261981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.261989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.262007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.262048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262072 | orchestrator | 2025-10-09 10:30:51.262079 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-10-09 10:30:51.262085 | orchestrator | Thursday 09 October 2025 10:28:22 +0000 (0:00:04.527) 0:04:33.430 ****** 2025-10-09 10:30:51.262095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.262103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262116 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.262126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.262137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262150 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.262161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.262177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:30:51.262194 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.262200 | orchestrator | 2025-10-09 10:30:51.262207 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-10-09 10:30:51.262213 | orchestrator | Thursday 09 October 2025 10:28:23 +0000 (0:00:01.037) 0:04:34.468 ****** 2025-10-09 10:30:51.262232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262259 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.262265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262296 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.262304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-10-09 10:30:51.262337 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.262344 | orchestrator | 2025-10-09 10:30:51.262351 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-10-09 10:30:51.262358 | orchestrator | Thursday 09 October 2025 10:28:25 +0000 (0:00:01.315) 0:04:35.784 ****** 2025-10-09 10:30:51.262365 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.262372 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.262379 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.262386 | orchestrator | 2025-10-09 10:30:51.262393 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-10-09 10:30:51.262403 | orchestrator | Thursday 09 October 2025 10:28:26 +0000 (0:00:01.330) 0:04:37.114 ****** 2025-10-09 10:30:51.262410 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.262417 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.262424 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.262431 | orchestrator | 2025-10-09 10:30:51.262438 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-10-09 10:30:51.262445 | orchestrator | Thursday 09 October 2025 10:28:28 +0000 (0:00:02.162) 0:04:39.277 ****** 2025-10-09 10:30:51.262452 | orchestrator | included: nova-cell for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-10-09 10:30:51.262459 | orchestrator | 2025-10-09 10:30:51.262466 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-10-09 10:30:51.262473 | orchestrator | Thursday 09 October 2025 10:28:30 +0000 (0:00:01.672) 0:04:40.949 ****** 2025-10-09 10:30:51.262481 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-10-09 10:30:51.262488 | orchestrator | 2025-10-09 10:30:51.262495 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-10-09 10:30:51.262502 | orchestrator | Thursday 09 October 2025 10:28:31 +0000 (0:00:00.912) 0:04:41.861 ****** 2025-10-09 10:30:51.262509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-10-09 10:30:51.262517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-10-09 10:30:51.262524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262532 | orchestrator |
2025-10-09 10:30:51.262539 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-10-09 10:30:51.262546 | orchestrator | Thursday 09 October 2025 10:28:35 +0000 (0:00:04.629) 0:04:46.491 ******
2025-10-09 10:30:51.262557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262568 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.262575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262582 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.262593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262600 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.262607 | orchestrator |
2025-10-09 10:30:51.262614 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-10-09 10:30:51.262621 | orchestrator | Thursday 09 October 2025 10:28:37 +0000 (0:00:01.471) 0:04:47.962 ******
2025-10-09 10:30:51.262628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-10-09 10:30:51.262636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-10-09 10:30:51.262643 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.262649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-10-09 10:30:51.262656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-10-09 10:30:51.262662 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.262668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-10-09 10:30:51.262675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-10-09 10:30:51.262681 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.262688 | orchestrator |
2025-10-09 10:30:51.262694 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-10-09 10:30:51.262700 | orchestrator | Thursday 09 October 2025 10:28:38 +0000 (0:00:01.673) 0:04:49.636 ******
2025-10-09 10:30:51.262711 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.262717 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.262723 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.262729 | orchestrator |
2025-10-09 10:30:51.262736 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-10-09 10:30:51.262742 | orchestrator | Thursday 09 October 2025 10:28:41 +0000 (0:00:02.564) 0:04:52.200 ******
2025-10-09 10:30:51.262749 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:30:51.262755 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:30:51.262761 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:30:51.262767 | orchestrator |
2025-10-09 10:30:51.262774 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-10-09 10:30:51.262780 | orchestrator | Thursday 09 October 2025 10:28:44 +0000 (0:00:02.994) 0:04:55.194 ******
2025-10-09 10:30:51.262789 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-10-09 10:30:51.262795 | orchestrator |
2025-10-09 10:30:51.262802 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-10-09 10:30:51.262808 | orchestrator | Thursday 09 October 2025 10:28:46 +0000 (0:00:01.507) 0:04:56.702 ******
2025-10-09 10:30:51.262815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262821 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.262828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262834 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.262846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262853 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.262859 | orchestrator |
2025-10-09 10:30:51.262865 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-10-09 10:30:51.262872 | orchestrator | Thursday 09 October 2025 10:28:47 +0000 (0:00:01.366) 0:04:58.069 ******
2025-10-09 10:30:51.262878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262888 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.262895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262901 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.262908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-10-09 10:30:51.262914 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.262920 | orchestrator |
2025-10-09 10:30:51.262927 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-10-09 10:30:51.262933 | orchestrator | Thursday 09 October 2025 10:28:48 +0000 (0:00:01.389) 0:04:59.458 ******
2025-10-09 10:30:51.262939 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.262946 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.262952 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.262958 | orchestrator |
2025-10-09 10:30:51.262967 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-10-09 10:30:51.262973 | orchestrator | Thursday 09 October 2025 10:28:50 +0000 (0:00:01.888) 0:05:01.347 ******
2025-10-09 10:30:51.262980 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:30:51.262986 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:30:51.262993 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:30:51.262999 | orchestrator |
2025-10-09 10:30:51.263005 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-10-09 10:30:51.263011 | orchestrator | Thursday 09 October 2025 10:28:53 +0000 (0:00:02.499) 0:05:03.847 ******
2025-10-09 10:30:51.263018 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:30:51.263024 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:30:51.263030 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:30:51.263036 | orchestrator |
2025-10-09 10:30:51.263042 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-10-09 10:30:51.263049 | orchestrator | Thursday 09 October 2025 10:28:56 +0000 (0:00:03.187) 0:05:07.034 ******
2025-10-09 10:30:51.263055 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-10-09 10:30:51.263061 | orchestrator |
2025-10-09 10:30:51.263067 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-10-09 10:30:51.263074 | orchestrator | Thursday 09 October 2025 10:28:57 +0000 (0:00:00.872) 0:05:07.907 ******
2025-10-09 10:30:51.263083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-10-09 10:30:51.263090 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.263096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-10-09 10:30:51.263107 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.263113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-10-09 10:30:51.263120 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.263126 | orchestrator |
2025-10-09 10:30:51.263132 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-10-09 10:30:51.263139 | orchestrator | Thursday 09 October 2025 10:28:58 +0000 (0:00:01.394) 0:05:09.301 ******
2025-10-09 10:30:51.263145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-10-09 10:30:51.263151 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.263158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-10-09 10:30:51.263164 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.263174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-10-09 10:30:51.263180 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.263187 | orchestrator |
2025-10-09 10:30:51.263193 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-10-09 10:30:51.263199 | orchestrator | Thursday 09 October 2025 10:29:00 +0000 (0:00:01.458) 0:05:10.760 ******
2025-10-09 10:30:51.263205 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.263212 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.263263 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.263270 | orchestrator |
2025-10-09 10:30:51.263276 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-10-09 10:30:51.263282 | orchestrator | Thursday 09 October 2025 10:29:01 +0000 (0:00:01.699) 0:05:12.459 ******
2025-10-09 10:30:51.263288 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:30:51.263295 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:30:51.263305 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:30:51.263311 | orchestrator |
2025-10-09 10:30:51.263318 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-10-09 10:30:51.263324 | orchestrator | Thursday 09 October 2025 10:29:04 +0000 (0:00:02.770) 0:05:15.230 ******
2025-10-09 10:30:51.263330 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:30:51.263337 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:30:51.263343 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:30:51.263349 | orchestrator |
2025-10-09 10:30:51.263358 | orchestrator | TASK [include_role : octavia] **************************************************
2025-10-09 10:30:51.263365 | orchestrator | Thursday 09 October 2025 10:29:08 +0000 (0:00:03.461) 0:05:18.691 ******
2025-10-09 10:30:51.263371 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:30:51.263377 | orchestrator |
2025-10-09 10:30:51.263384 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-10-09 10:30:51.263390 | orchestrator | Thursday 09 October 2025 10:29:09 +0000 (0:00:01.694) 0:05:20.386 ******
2025-10-09 10:30:51.263397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.263404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.263414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:30:51.263421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.263441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:30:51.263448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:30:51.263461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.263490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.263504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.263517 | orchestrator |
2025-10-09 10:30:51.263523 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-10-09 10:30:51.263529 | orchestrator | Thursday 09 October 2025 10:29:13 +0000 (0:00:03.485) 0:05:23.871 ******
2025-10-09 10:30:51.263539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.263550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:30:51.263559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.263579 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.263586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.263595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:30:51.263605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.263627 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.263634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:30:51.263641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:30:51.263647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:30:51.263668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:30:51.263674 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:30:51.263680 | orchestrator |
2025-10-09 10:30:51.263687 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-10-09 10:30:51.263693 | orchestrator | Thursday 09 October 2025 10:29:14 +0000 (0:00:00.782) 0:05:24.654 ******
2025-10-09 10:30:51.263704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-09 10:30:51.263711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-09 10:30:51.263717 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:30:51.263723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-09 10:30:51.263730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-10-09 10:30:51.263736 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:30:51.263743 | orchestrator | skipping: [testbed-node-2] =>
(item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-10-09 10:30:51.263749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-10-09 10:30:51.263755 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.263762 | orchestrator | 2025-10-09 10:30:51.263768 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-10-09 10:30:51.263774 | orchestrator | Thursday 09 October 2025 10:29:15 +0000 (0:00:01.689) 0:05:26.344 ****** 2025-10-09 10:30:51.263780 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.263786 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.263793 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.263799 | orchestrator | 2025-10-09 10:30:51.263805 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-10-09 10:30:51.263811 | orchestrator | Thursday 09 October 2025 10:29:17 +0000 (0:00:01.526) 0:05:27.870 ****** 2025-10-09 10:30:51.263821 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.263827 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.263833 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.263840 | orchestrator | 2025-10-09 10:30:51.263846 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-10-09 10:30:51.263852 | orchestrator | Thursday 09 October 2025 10:29:19 +0000 (0:00:02.223) 0:05:30.094 ****** 2025-10-09 10:30:51.263858 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.263864 | orchestrator | 2025-10-09 10:30:51.263871 | orchestrator | TASK 
[haproxy-config : Copying over opensearch haproxy config] ***************** 2025-10-09 10:30:51.263877 | orchestrator | Thursday 09 October 2025 10:29:20 +0000 (0:00:01.378) 0:05:31.473 ****** 2025-10-09 10:30:51.263948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:30:51.263957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:30:51.263968 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:30:51.263975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:30:51.264005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:30:51.264014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:30:51.264021 | orchestrator | 2025-10-09 10:30:51.264028 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-10-09 10:30:51.264034 | orchestrator | Thursday 09 October 2025 10:29:26 +0000 (0:00:05.912) 0:05:37.386 ****** 2025-10-09 10:30:51.264044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:30:51.264051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:30:51.264062 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.264069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:30:51.264094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:30:51.264102 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.264112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:30:51.264119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:30:51.264130 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.264137 | orchestrator | 2025-10-09 10:30:51.264143 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-10-09 10:30:51.264149 | orchestrator | Thursday 09 October 2025 10:29:27 +0000 (0:00:00.653) 0:05:38.039 ****** 2025-10-09 10:30:51.264156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-10-09 10:30:51.264162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:30:51.264168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:30:51.264175 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.264181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-10-09 10:30:51.264204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:30:51.264211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:30:51.264231 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.264238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-10-09 10:30:51.264244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:30:51.264251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-10-09 10:30:51.264257 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.264263 | orchestrator | 2025-10-09 10:30:51.264269 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-10-09 10:30:51.264279 | orchestrator | Thursday 09 October 2025 10:29:28 +0000 (0:00:00.984) 0:05:39.023 ****** 2025-10-09 10:30:51.264286 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.264292 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.264298 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.264308 | orchestrator | 2025-10-09 10:30:51.264314 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL rules config] ********* 2025-10-09 10:30:51.264325 | orchestrator | Thursday 09 October 2025 10:29:29 +0000 (0:00:00.860) 0:05:39.884 ****** 2025-10-09 10:30:51.264331 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.264338 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.264344 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.264350 | orchestrator | 2025-10-09 10:30:51.264357 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-10-09 10:30:51.264363 | orchestrator | Thursday 09 October 2025 10:29:30 +0000 (0:00:01.467) 0:05:41.351 ****** 2025-10-09 10:30:51.264369 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.264376 | orchestrator | 2025-10-09 10:30:51.264382 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-10-09 10:30:51.264388 | orchestrator | Thursday 09 October 2025 10:29:32 +0000 (0:00:01.576) 0:05:42.928 ****** 2025-10-09 10:30:51.264395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:30:51.264402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:30:51.264408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:30:51.264441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:30:51.264469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264476 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:30:51.264507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:30:51.264529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:30:51.264559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}}}})  2025-10-09 10:30:51.264566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:30:51.264601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:30:51.264611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:30:51.264641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:30:51.264684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264714 | orchestrator | 2025-10-09 10:30:51.264720 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-10-09 10:30:51.264730 | orchestrator | Thursday 09 October 2025 10:29:37 +0000 (0:00:04.849) 0:05:47.778 ****** 2025-10-09 10:30:51.264737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:30:51.264746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:30:51.264753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-09 10:30:51.264788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:30:51.264798 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264818 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.264824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:30:51.264831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:30:51.264840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-09 10:30:51.264879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:30:51.264885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:30:51.264900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264907 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:30:51.264913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264936 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.264942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.264959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-09 10:30:51.264971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-10-09 10:30:51.264981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:30:51.264994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:30:51.265000 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265007 | orchestrator | 2025-10-09 10:30:51.265013 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-10-09 10:30:51.265019 | orchestrator | Thursday 09 October 2025 10:29:38 +0000 (0:00:01.384) 0:05:49.162 ****** 2025-10-09 10:30:51.265026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-10-09 10:30:51.265032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-10-09 10:30:51.265039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:30:51.265050 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:30:51.265057 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-10-09 10:30:51.265083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-10-09 10:30:51.265094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:30:51.265105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:30:51.265116 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-10-09 10:30:51.265136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-10-09 10:30:51.265152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:30:51.265163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-10-09 10:30:51.265173 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265184 | orchestrator | 2025-10-09 10:30:51.265195 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-10-09 10:30:51.265206 | orchestrator | Thursday 09 October 2025 10:29:39 +0000 (0:00:01.142) 0:05:50.304 ****** 2025-10-09 10:30:51.265258 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265267 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265273 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265280 | orchestrator | 2025-10-09 10:30:51.265286 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-10-09 10:30:51.265292 | orchestrator | Thursday 09 October 2025 10:29:40 +0000 (0:00:00.460) 0:05:50.765 ****** 2025-10-09 10:30:51.265299 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265305 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265311 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265317 | orchestrator | 2025-10-09 10:30:51.265330 | orchestrator | TASK [include_role : 
rabbitmq] ************************************************* 2025-10-09 10:30:51.265337 | orchestrator | Thursday 09 October 2025 10:29:41 +0000 (0:00:01.539) 0:05:52.304 ****** 2025-10-09 10:30:51.265343 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.265349 | orchestrator | 2025-10-09 10:30:51.265355 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-10-09 10:30:51.265362 | orchestrator | Thursday 09 October 2025 10:29:43 +0000 (0:00:01.897) 0:05:54.202 ****** 2025-10-09 10:30:51.265368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:30:51.265381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:30:51.265392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-10-09 10:30:51.265398 | orchestrator | 2025-10-09 10:30:51.265404 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-10-09 10:30:51.265410 | orchestrator | Thursday 09 October 2025 10:29:46 +0000 (0:00:02.695) 0:05:56.898 ****** 2025-10-09 10:30:51.265415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 
'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-10-09 10:30:51.265425 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-10-09 
10:30:51.265437 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-10-09 10:30:51.265452 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265458 | orchestrator | 2025-10-09 10:30:51.265463 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-10-09 10:30:51.265469 | orchestrator | Thursday 09 October 2025 10:29:46 +0000 (0:00:00.428) 0:05:57.327 ****** 2025-10-09 10:30:51.265475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-10-09 10:30:51.265480 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-10-09 10:30:51.265494 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265500 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-10-09 10:30:51.265505 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265511 | orchestrator | 2025-10-09 10:30:51.265516 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-10-09 10:30:51.265527 | orchestrator | Thursday 09 October 2025 10:29:47 +0000 (0:00:00.922) 0:05:58.249 ****** 2025-10-09 10:30:51.265532 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265538 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265543 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265549 | orchestrator | 2025-10-09 10:30:51.265554 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-10-09 10:30:51.265559 | orchestrator | Thursday 09 October 2025 10:29:48 +0000 (0:00:00.461) 0:05:58.710 ****** 2025-10-09 10:30:51.265565 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265570 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265576 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265581 | orchestrator | 2025-10-09 10:30:51.265587 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-10-09 10:30:51.265592 | orchestrator | Thursday 09 October 2025 10:29:49 +0000 (0:00:01.258) 0:05:59.969 ****** 2025-10-09 10:30:51.265598 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:30:51.265603 | orchestrator | 2025-10-09 10:30:51.265609 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-10-09 10:30:51.265614 | orchestrator | Thursday 09 October 2025 10:29:51 +0000 (0:00:01.694) 0:06:01.663 ****** 2025-10-09 10:30:51.265620 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.265629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.265635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.265648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.265655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.265661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-10-09 10:30:51.265667 | orchestrator | 2025-10-09 10:30:51.265675 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-10-09 10:30:51.265681 | orchestrator | Thursday 09 October 2025 10:29:57 +0000 (0:00:06.344) 0:06:08.007 ****** 
2025-10-09 10:30:51.265687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.265699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.265705 | orchestrator | skipping: [testbed-node-0] 2025-10-09 
10:30:51.265710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.265716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.265722 | orchestrator | skipping: [testbed-node-1] 2025-10-09 
10:30:51.265730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.265743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-10-09 10:30:51.265749 | orchestrator | skipping: [testbed-node-2] 2025-10-09 
10:30:51.265754 | orchestrator | 2025-10-09 10:30:51.265760 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-10-09 10:30:51.265766 | orchestrator | Thursday 09 October 2025 10:29:58 +0000 (0:00:00.671) 0:06:08.679 ****** 2025-10-09 10:30:51.265771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265794 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265822 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-10-09 10:30:51.265863 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265869 | orchestrator | 2025-10-09 10:30:51.265874 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-10-09 10:30:51.265880 | orchestrator | Thursday 09 October 2025 10:29:59 +0000 (0:00:01.817) 0:06:10.497 ****** 2025-10-09 10:30:51.265885 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.265891 | 
orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.265896 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.265902 | orchestrator | 2025-10-09 10:30:51.265907 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-10-09 10:30:51.265913 | orchestrator | Thursday 09 October 2025 10:30:01 +0000 (0:00:01.359) 0:06:11.856 ****** 2025-10-09 10:30:51.265918 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.265924 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.265929 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.265935 | orchestrator | 2025-10-09 10:30:51.265940 | orchestrator | TASK [include_role : swift] **************************************************** 2025-10-09 10:30:51.265946 | orchestrator | Thursday 09 October 2025 10:30:03 +0000 (0:00:02.438) 0:06:14.294 ****** 2025-10-09 10:30:51.265951 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265959 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265965 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.265971 | orchestrator | 2025-10-09 10:30:51.265976 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-10-09 10:30:51.265981 | orchestrator | Thursday 09 October 2025 10:30:04 +0000 (0:00:00.379) 0:06:14.674 ****** 2025-10-09 10:30:51.265987 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.265992 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.265998 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266003 | orchestrator | 2025-10-09 10:30:51.266009 | orchestrator | TASK [include_role : trove] **************************************************** 2025-10-09 10:30:51.266033 | orchestrator | Thursday 09 October 2025 10:30:04 +0000 (0:00:00.329) 0:06:15.004 ****** 2025-10-09 10:30:51.266039 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266045 | 
orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266051 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266057 | orchestrator | 2025-10-09 10:30:51.266062 | orchestrator | TASK [include_role : venus] **************************************************** 2025-10-09 10:30:51.266067 | orchestrator | Thursday 09 October 2025 10:30:05 +0000 (0:00:00.676) 0:06:15.681 ****** 2025-10-09 10:30:51.266073 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266078 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266083 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266089 | orchestrator | 2025-10-09 10:30:51.266094 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-10-09 10:30:51.266100 | orchestrator | Thursday 09 October 2025 10:30:05 +0000 (0:00:00.355) 0:06:16.036 ****** 2025-10-09 10:30:51.266105 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266110 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266116 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266121 | orchestrator | 2025-10-09 10:30:51.266126 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-10-09 10:30:51.266132 | orchestrator | Thursday 09 October 2025 10:30:05 +0000 (0:00:00.338) 0:06:16.375 ****** 2025-10-09 10:30:51.266137 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266142 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266148 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266153 | orchestrator | 2025-10-09 10:30:51.266158 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-10-09 10:30:51.266168 | orchestrator | Thursday 09 October 2025 10:30:06 +0000 (0:00:01.077) 0:06:17.452 ****** 2025-10-09 10:30:51.266174 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:30:51.266179 | 
orchestrator | ok: [testbed-node-1] 2025-10-09 10:30:51.266184 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:30:51.266190 | orchestrator | 2025-10-09 10:30:51.266195 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-10-09 10:30:51.266200 | orchestrator | Thursday 09 October 2025 10:30:07 +0000 (0:00:00.698) 0:06:18.151 ****** 2025-10-09 10:30:51.266206 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:30:51.266211 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:30:51.266232 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:30:51.266238 | orchestrator | 2025-10-09 10:30:51.266243 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-10-09 10:30:51.266249 | orchestrator | Thursday 09 October 2025 10:30:07 +0000 (0:00:00.358) 0:06:18.509 ****** 2025-10-09 10:30:51.266254 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:30:51.266260 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:30:51.266265 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:30:51.266271 | orchestrator | 2025-10-09 10:30:51.266276 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-10-09 10:30:51.266282 | orchestrator | Thursday 09 October 2025 10:30:08 +0000 (0:00:00.970) 0:06:19.480 ****** 2025-10-09 10:30:51.266287 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:30:51.266293 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:30:51.266298 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:30:51.266303 | orchestrator | 2025-10-09 10:30:51.266309 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-10-09 10:30:51.266314 | orchestrator | Thursday 09 October 2025 10:30:10 +0000 (0:00:01.272) 0:06:20.752 ****** 2025-10-09 10:30:51.266320 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:30:51.266325 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:30:51.266334 | 
orchestrator | ok: [testbed-node-1] 2025-10-09 10:30:51.266339 | orchestrator | 2025-10-09 10:30:51.266345 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-10-09 10:30:51.266350 | orchestrator | Thursday 09 October 2025 10:30:11 +0000 (0:00:00.975) 0:06:21.728 ****** 2025-10-09 10:30:51.266356 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.266361 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.266367 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.266372 | orchestrator | 2025-10-09 10:30:51.266378 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-10-09 10:30:51.266383 | orchestrator | Thursday 09 October 2025 10:30:19 +0000 (0:00:08.606) 0:06:30.335 ****** 2025-10-09 10:30:51.266389 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:30:51.266394 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:30:51.266399 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:30:51.266405 | orchestrator | 2025-10-09 10:30:51.266410 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-10-09 10:30:51.266416 | orchestrator | Thursday 09 October 2025 10:30:20 +0000 (0:00:00.777) 0:06:31.112 ****** 2025-10-09 10:30:51.266421 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.266427 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.266432 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.266437 | orchestrator | 2025-10-09 10:30:51.266443 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-10-09 10:30:51.266448 | orchestrator | Thursday 09 October 2025 10:30:34 +0000 (0:00:13.669) 0:06:44.782 ****** 2025-10-09 10:30:51.266454 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:30:51.266459 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:30:51.266465 | orchestrator | ok: [testbed-node-2] 
2025-10-09 10:30:51.266470 | orchestrator | 2025-10-09 10:30:51.266476 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-10-09 10:30:51.266481 | orchestrator | Thursday 09 October 2025 10:30:35 +0000 (0:00:01.248) 0:06:46.030 ****** 2025-10-09 10:30:51.266490 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:30:51.266496 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:30:51.266501 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:30:51.266507 | orchestrator | 2025-10-09 10:30:51.266515 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-10-09 10:30:51.266521 | orchestrator | Thursday 09 October 2025 10:30:45 +0000 (0:00:09.904) 0:06:55.935 ****** 2025-10-09 10:30:51.266526 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266532 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266537 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266543 | orchestrator | 2025-10-09 10:30:51.266548 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-10-09 10:30:51.266554 | orchestrator | Thursday 09 October 2025 10:30:45 +0000 (0:00:00.391) 0:06:56.327 ****** 2025-10-09 10:30:51.266559 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266565 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266570 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266576 | orchestrator | 2025-10-09 10:30:51.266581 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-10-09 10:30:51.266587 | orchestrator | Thursday 09 October 2025 10:30:46 +0000 (0:00:00.416) 0:06:56.743 ****** 2025-10-09 10:30:51.266592 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266597 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266603 | orchestrator | skipping: [testbed-node-2] 
2025-10-09 10:30:51.266608 | orchestrator | 2025-10-09 10:30:51.266614 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-10-09 10:30:51.266619 | orchestrator | Thursday 09 October 2025 10:30:46 +0000 (0:00:00.693) 0:06:57.437 ****** 2025-10-09 10:30:51.266625 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266630 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266635 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266641 | orchestrator | 2025-10-09 10:30:51.266646 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-10-09 10:30:51.266652 | orchestrator | Thursday 09 October 2025 10:30:47 +0000 (0:00:00.414) 0:06:57.852 ****** 2025-10-09 10:30:51.266657 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266663 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266668 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266674 | orchestrator | 2025-10-09 10:30:51.266679 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-10-09 10:30:51.266685 | orchestrator | Thursday 09 October 2025 10:30:47 +0000 (0:00:00.371) 0:06:58.223 ****** 2025-10-09 10:30:51.266690 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:30:51.266695 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:30:51.266701 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:30:51.266706 | orchestrator | 2025-10-09 10:30:51.266712 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-10-09 10:30:51.266717 | orchestrator | Thursday 09 October 2025 10:30:47 +0000 (0:00:00.398) 0:06:58.622 ****** 2025-10-09 10:30:51.266723 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:30:51.266728 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:30:51.266733 | orchestrator | ok: [testbed-node-2] 2025-10-09 
10:30:51.266739 | orchestrator | 2025-10-09 10:30:51.266744 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-10-09 10:30:51.266750 | orchestrator | Thursday 09 October 2025 10:30:49 +0000 (0:00:01.370) 0:06:59.993 ****** 2025-10-09 10:30:51.266755 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:30:51.266761 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:30:51.266766 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:30:51.266772 | orchestrator | 2025-10-09 10:30:51.266777 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:30:51.266783 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-10-09 10:30:51.266794 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-10-09 10:30:51.266800 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-10-09 10:30:51.266805 | orchestrator | 2025-10-09 10:30:51.266811 | orchestrator | 2025-10-09 10:30:51.266819 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:30:51.266825 | orchestrator | Thursday 09 October 2025 10:30:50 +0000 (0:00:00.859) 0:07:00.852 ****** 2025-10-09 10:30:51.266830 | orchestrator | =============================================================================== 2025-10-09 10:30:51.266836 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.67s 2025-10-09 10:30:51.266841 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.90s 2025-10-09 10:30:51.266847 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.61s 2025-10-09 10:30:51.266852 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.22s 
2025-10-09 10:30:51.266858 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.34s 2025-10-09 10:30:51.266863 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.91s 2025-10-09 10:30:51.266868 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.77s 2025-10-09 10:30:51.266874 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.04s 2025-10-09 10:30:51.266879 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 5.00s 2025-10-09 10:30:51.266885 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.85s 2025-10-09 10:30:51.266890 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.63s 2025-10-09 10:30:51.266895 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.53s 2025-10-09 10:30:51.266901 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.50s 2025-10-09 10:30:51.266906 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.48s 2025-10-09 10:30:51.266914 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.46s 2025-10-09 10:30:51.266920 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.40s 2025-10-09 10:30:51.266925 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.32s 2025-10-09 10:30:51.266931 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.29s 2025-10-09 10:30:51.266936 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.99s 2025-10-09 10:30:51.266941 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.98s 2025-10-09 
10:30:51.266947 | orchestrator | 2025-10-09 10:30:51 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:51.266953 | orchestrator | 2025-10-09 10:30:51 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:30:54.286691 | orchestrator | 2025-10-09 10:30:54 | INFO  | Task d173bdf2-63a1-4352-bc0b-3eaf6f106cfe is in state STARTED 2025-10-09 10:30:54.288131 | orchestrator | 2025-10-09 10:30:54 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED 2025-10-09 10:30:54.289694 | orchestrator | 2025-10-09 10:30:54 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state STARTED 2025-10-09 10:30:54.289762 | orchestrator | 2025-10-09 10:30:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:53.381031 | orchestrator | 2025-10-09 10:32:53 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED 2025-10-09 10:32:53.382196 | orchestrator | 2025-10-09 10:32:53 | INFO  | Task d173bdf2-63a1-4352-bc0b-3eaf6f106cfe is in state STARTED 2025-10-09 10:32:53.384383 | orchestrator | 2025-10-09 10:32:53 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED 2025-10-09 10:32:53.391266 | orchestrator | 2025-10-09 10:32:53 | INFO  | Task 6aa86fd6-7702-423c-9e8a-867159ce6aac is in state SUCCESS 2025-10-09 10:32:53.393973 | orchestrator | 2025-10-09 10:32:53.394006 | orchestrator | 2025-10-09 10:32:53.394079 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-10-09 10:32:53.394093 | orchestrator | 2025-10-09 10:32:53.394105 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-10-09 10:32:53.394117 | orchestrator | Thursday 09 October 2025 10:21:10 +0000 (0:00:00.969) 0:00:00.969 ****** 2025-10-09 10:32:53.394130 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.394144 | orchestrator | 2025-10-09 10:32:53.394155 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-10-09 10:32:53.394167 | 
orchestrator | Thursday 09 October 2025 10:21:11 +0000 (0:00:01.284) 0:00:02.254 ****** 2025-10-09 10:32:53.394178 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.394191 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.394203 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.394272 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.394284 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.394295 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.394306 | orchestrator | 2025-10-09 10:32:53.394318 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-10-09 10:32:53.394331 | orchestrator | Thursday 09 October 2025 10:21:13 +0000 (0:00:01.932) 0:00:04.186 ****** 2025-10-09 10:32:53.394342 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.394354 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.394365 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.394377 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.394388 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.394399 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.394411 | orchestrator | 2025-10-09 10:32:53.394422 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-10-09 10:32:53.394434 | orchestrator | Thursday 09 October 2025 10:21:14 +0000 (0:00:00.830) 0:00:05.017 ****** 2025-10-09 10:32:53.394445 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.394457 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.394468 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.394480 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.394491 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.394503 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.394514 | orchestrator | 2025-10-09 10:32:53.394543 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 
2025-10-09 10:32:53.394583 | orchestrator | Thursday 09 October 2025 10:21:15 +0000 (0:00:00.911) 0:00:05.928 ****** 2025-10-09 10:32:53.394719 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.394734 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.394746 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.394759 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.394771 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.394783 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.394795 | orchestrator | 2025-10-09 10:32:53.394807 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-10-09 10:32:53.394820 | orchestrator | Thursday 09 October 2025 10:21:15 +0000 (0:00:00.769) 0:00:06.698 ****** 2025-10-09 10:32:53.394832 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.394845 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.394857 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.394869 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.394881 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.394893 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.394905 | orchestrator | 2025-10-09 10:32:53.394918 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-10-09 10:32:53.394931 | orchestrator | Thursday 09 October 2025 10:21:16 +0000 (0:00:00.705) 0:00:07.403 ****** 2025-10-09 10:32:53.394942 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.394953 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.394963 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.394974 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.394985 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.394996 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.395007 | orchestrator | 2025-10-09 10:32:53.395018 | orchestrator | TASK [ceph-facts : Set_fact 
discovered_interpreter_python if not previously set] *** 2025-10-09 10:32:53.395029 | orchestrator | Thursday 09 October 2025 10:21:17 +0000 (0:00:00.931) 0:00:08.335 ****** 2025-10-09 10:32:53.395040 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.395052 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.395063 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.395074 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.395085 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.395095 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.395107 | orchestrator | 2025-10-09 10:32:53.395118 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-10-09 10:32:53.395129 | orchestrator | Thursday 09 October 2025 10:21:18 +0000 (0:00:00.885) 0:00:09.220 ****** 2025-10-09 10:32:53.395140 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.395151 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.395162 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.395173 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.395183 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.395194 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.395205 | orchestrator | 2025-10-09 10:32:53.395233 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-10-09 10:32:53.395245 | orchestrator | Thursday 09 October 2025 10:21:19 +0000 (0:00:01.148) 0:00:10.369 ****** 2025-10-09 10:32:53.395256 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:32:53.395267 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:32:53.395278 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:32:53.395289 | orchestrator | 2025-10-09 10:32:53.395300 | orchestrator | TASK 
[ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-10-09 10:32:53.395340 | orchestrator | Thursday 09 October 2025 10:21:20 +0000 (0:00:01.114) 0:00:11.484 ****** 2025-10-09 10:32:53.395352 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.395420 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.395431 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.395442 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.395464 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.395475 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.395486 | orchestrator | 2025-10-09 10:32:53.395512 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-10-09 10:32:53.395524 | orchestrator | Thursday 09 October 2025 10:21:21 +0000 (0:00:01.180) 0:00:12.665 ****** 2025-10-09 10:32:53.395535 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:32:53.395547 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:32:53.395558 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:32:53.395569 | orchestrator | 2025-10-09 10:32:53.395657 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-10-09 10:32:53.395670 | orchestrator | Thursday 09 October 2025 10:21:25 +0000 (0:00:03.521) 0:00:16.186 ****** 2025-10-09 10:32:53.395681 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-09 10:32:53.395692 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-09 10:32:53.395704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-09 10:32:53.395715 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.395726 | orchestrator | 2025-10-09 10:32:53.395737 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] 
********************* 2025-10-09 10:32:53.395748 | orchestrator | Thursday 09 October 2025 10:21:26 +0000 (0:00:01.397) 0:00:17.583 ****** 2025-10-09 10:32:53.395761 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.395775 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.395794 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.395806 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.395818 | orchestrator | 2025-10-09 10:32:53.395829 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-10-09 10:32:53.395840 | orchestrator | Thursday 09 October 2025 10:21:28 +0000 (0:00:01.506) 0:00:19.090 ****** 2025-10-09 10:32:53.395854 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.395868 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.395880 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.395891 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.395910 | orchestrator | 2025-10-09 10:32:53.395921 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-10-09 10:32:53.395932 | orchestrator | Thursday 09 October 2025 10:21:28 +0000 (0:00:00.531) 0:00:19.621 ****** 2025-10-09 10:32:53.395946 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-10-09 10:21:23.126257', 'end': '2025-10-09 10:21:23.409693', 'delta': '0:00:00.283436', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.395968 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', 
'--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-10-09 10:21:24.013755', 'end': '2025-10-09 10:21:24.293791', 'delta': '0:00:00.280036', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.395981 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-10-09 10:21:24.861589', 'end': '2025-10-09 10:21:25.132813', 'delta': '0:00:00.271224', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.395993 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.396004 | orchestrator | 2025-10-09 10:32:53.396020 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-10-09 10:32:53.396032 | orchestrator | Thursday 09 October 2025 10:21:29 +0000 (0:00:00.549) 0:00:20.170 ****** 2025-10-09 10:32:53.396043 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.396055 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.396066 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.396077 | orchestrator | ok: [testbed-node-3] 
2025-10-09 10:32:53.396088 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.396099 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.396110 | orchestrator | 2025-10-09 10:32:53.396121 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-10-09 10:32:53.396132 | orchestrator | Thursday 09 October 2025 10:21:32 +0000 (0:00:03.018) 0:00:23.189 ****** 2025-10-09 10:32:53.396143 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.396154 | orchestrator | 2025-10-09 10:32:53.396166 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-10-09 10:32:53.396177 | orchestrator | Thursday 09 October 2025 10:21:33 +0000 (0:00:00.795) 0:00:23.984 ****** 2025-10-09 10:32:53.396188 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.396199 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.396228 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.396239 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.396257 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.396268 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.396279 | orchestrator | 2025-10-09 10:32:53.396290 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-10-09 10:32:53.396301 | orchestrator | Thursday 09 October 2025 10:21:35 +0000 (0:00:02.292) 0:00:26.276 ****** 2025-10-09 10:32:53.396312 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.396323 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.396334 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.396344 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.396355 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.396366 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.396377 | orchestrator | 2025-10-09 10:32:53.396388 | orchestrator | TASK 
[ceph-facts : Set_fact fsid] ********************************************** 2025-10-09 10:32:53.396399 | orchestrator | Thursday 09 October 2025 10:21:38 +0000 (0:00:02.502) 0:00:28.779 ****** 2025-10-09 10:32:53.396409 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.396420 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.396431 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.396442 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.396452 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.396464 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.396482 | orchestrator | 2025-10-09 10:32:53.396500 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-10-09 10:32:53.396519 | orchestrator | Thursday 09 October 2025 10:21:39 +0000 (0:00:01.860) 0:00:30.640 ****** 2025-10-09 10:32:53.396537 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.396554 | orchestrator | 2025-10-09 10:32:53.396572 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-10-09 10:32:53.396589 | orchestrator | Thursday 09 October 2025 10:21:40 +0000 (0:00:00.234) 0:00:30.874 ****** 2025-10-09 10:32:53.396606 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.396796 | orchestrator | 2025-10-09 10:32:53.396816 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-09 10:32:53.396828 | orchestrator | Thursday 09 October 2025 10:21:40 +0000 (0:00:00.267) 0:00:31.142 ****** 2025-10-09 10:32:53.396839 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.396849 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.396860 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.396871 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.396882 | orchestrator | skipping: [testbed-node-4] 2025-10-09 
10:32:53.396893 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.396904 | orchestrator | 2025-10-09 10:32:53.396915 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-10-09 10:32:53.396935 | orchestrator | Thursday 09 October 2025 10:21:41 +0000 (0:00:00.874) 0:00:32.016 ****** 2025-10-09 10:32:53.396947 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.396958 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.396968 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.396979 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.396990 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.397001 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.397011 | orchestrator | 2025-10-09 10:32:53.397022 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-10-09 10:32:53.397033 | orchestrator | Thursday 09 October 2025 10:21:42 +0000 (0:00:01.346) 0:00:33.363 ****** 2025-10-09 10:32:53.397044 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.397055 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.397065 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.397076 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.397087 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.397097 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.397108 | orchestrator | 2025-10-09 10:32:53.397119 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-10-09 10:32:53.397140 | orchestrator | Thursday 09 October 2025 10:21:43 +0000 (0:00:01.102) 0:00:34.465 ****** 2025-10-09 10:32:53.397151 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.397162 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.397172 | orchestrator | skipping: [testbed-node-2] 2025-10-09 
10:32:53.397183 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.397193 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.397204 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.397277 | orchestrator | 2025-10-09 10:32:53.397289 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-10-09 10:32:53.397300 | orchestrator | Thursday 09 October 2025 10:21:45 +0000 (0:00:01.519) 0:00:35.985 ****** 2025-10-09 10:32:53.397310 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.397321 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.397332 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.397343 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.397353 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.397364 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.397375 | orchestrator | 2025-10-09 10:32:53.397393 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-10-09 10:32:53.397404 | orchestrator | Thursday 09 October 2025 10:21:45 +0000 (0:00:00.652) 0:00:36.637 ****** 2025-10-09 10:32:53.397467 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.397479 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.397490 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.397501 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.397512 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.397523 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.397534 | orchestrator | 2025-10-09 10:32:53.397545 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-10-09 10:32:53.397556 | orchestrator | Thursday 09 October 2025 10:21:46 +0000 (0:00:00.849) 0:00:37.486 ****** 2025-10-09 10:32:53.397567 | orchestrator | skipping: [testbed-node-0] 2025-10-09 
10:32:53.397578 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.397589 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.397600 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.397610 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.397621 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.397632 | orchestrator | 2025-10-09 10:32:53.397643 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-10-09 10:32:53.397717 | orchestrator | Thursday 09 October 2025 10:21:47 +0000 (0:00:00.855) 0:00:38.342 ****** 2025-10-09 10:32:53.397731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-10-09 10:32:53.397775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.397990 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part1', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part14', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part15', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part16', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 
'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b', 'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part1', 'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part14', 'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part15', 
'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part16', 'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398120 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.398132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-10-09 10:32:53.398143 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.398155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d', 'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398587 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cbdaba5--e3a8--55ff--9207--33249002ea74-osd--block--0cbdaba5--e3a8--55ff--9207--33249002ea74', 'dm-uuid-LVM-ExvMc93TaGMjWOqGPvd34m2gk1t4oCUOJ6FpMp03P2VtxgLz5RAEn3Dnels013gF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b8397ec--b473--5fab--a988--270c3fd4ebb0-osd--block--0b8397ec--b473--5fab--a988--270c3fd4ebb0', 'dm-uuid-LVM-Iu2PpFdOBa8teqvWKcbfD2Pd2CSRQtEGYNoKuoXUgYBbYH60uaRabo8PgEzus6ML'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398741 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0cbdaba5--e3a8--55ff--9207--33249002ea74-osd--block--0cbdaba5--e3a8--55ff--9207--33249002ea74'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uLdo5O-ec3P-ApYI-bZen-ZZ3F-BeNc-Ki292o', 'scsi-0QEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d', 'scsi-SQEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0b8397ec--b473--5fab--a988--270c3fd4ebb0-osd--block--0b8397ec--b473--5fab--a988--270c3fd4ebb0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lRzkGE-YzPM-XiG6-y68a-feZr-FiG0-MdFMqH', 'scsi-0QEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84', 'scsi-SQEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398755 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.398760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e', 'scsi-SQEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee-osd--block--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee', 'dm-uuid-LVM-au3ljzSANb0tyOeMGUgRFh2fQv14LQXqLqkTr72pBhnrUNTZZIjkiDu5w36Kbbq4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--db411f8a--05b0--54f7--b748--fd517a3c676f-osd--block--db411f8a--05b0--54f7--b748--fd517a3c676f', 
'dm-uuid-LVM-jpDQBe8QHm1K0O9IsCStmbdH56NsHtzn6CQJMN93ZeGkA72L0LSc1QtKsLj0mgLC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part15', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee-osd--block--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K4lFF5-75vb-ZsLJ-bPXw-JnwN-3ljd-cE9Yz9', 'scsi-0QEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d', 'scsi-SQEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--db411f8a--05b0--54f7--b748--fd517a3c676f-osd--block--db411f8a--05b0--54f7--b748--fd517a3c676f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FC0Iqg-a508-XVdl-dcs1-Lm9g-TMVF-jFQuVZ', 'scsi-0QEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168', 'scsi-SQEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398844 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.398848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3', 'scsi-SQEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398860 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.398864 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83d577c9--ff1a--5f1d--bd0e--44f99d742f78-osd--block--83d577c9--ff1a--5f1d--bd0e--44f99d742f78', 'dm-uuid-LVM-gxlCTZ5efJTHi74imUaaLMcOdZC9sz722geQT9GSu5DHYJXvaxEnu8fZKsMeh9uX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ce20a60--fba3--5536--8b48--1e48c039a9b4-osd--block--8ce20a60--fba3--5536--8b48--1e48c039a9b4', 'dm-uuid-LVM-2wowwvZuu9v58jhoRFdOjaRASbwQw8Dt4MH44vTkY6o4LKxALHPQgNRz4cfwq14j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-10-09 10:32:53.398903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:32:53.398934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part1', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part14', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part15', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part16', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398942 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--83d577c9--ff1a--5f1d--bd0e--44f99d742f78-osd--block--83d577c9--ff1a--5f1d--bd0e--44f99d742f78'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FroQeu-Fzqs-f6jq-MAe8-Csas-tQEp-PNWjna', 'scsi-0QEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397', 'scsi-SQEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8ce20a60--fba3--5536--8b48--1e48c039a9b4-osd--block--8ce20a60--fba3--5536--8b48--1e48c039a9b4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EkQEBW-S5T8-FI78-gRGq-CMjx-7hxO-72EIVH', 'scsi-0QEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425', 'scsi-SQEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:32:53.398954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f', 'scsi-SQEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-10-09 10:32:53.398958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-10-09 10:32:53.398964 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.398968 | orchestrator |
2025-10-09 10:32:53.398973 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-10-09 10:32:53.398977 | orchestrator | Thursday 09 October 2025 10:21:50 +0000 (0:00:02.459) 0:00:40.802 ******
2025-10-09 10:32:53.398982 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512',
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.398987 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.398999 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399004 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399008 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399011 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399019 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399023 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399033 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b', 'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part1', 'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part14', 'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part15', 'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part16', 'scsi-SQEMU_QEMU_HARDDISK_1155372d-89ce-41bb-8625-403a9b86a02b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-10-09 10:32:53.399039 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399044 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399048 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399058 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399062 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399066 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399070 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399075 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399080 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399090 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part1', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part14', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part15', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part16', 'scsi-SQEMU_QEMU_HARDDISK_08a584d3-4193-4c26-9ad8-9a2035627c92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 
512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399094 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.399098 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399105 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399109 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399118 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399122 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.399126 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399130 | orchestrator | skipping: [testbed-node-2] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399134 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399140 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399144 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399158 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d', 'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part15', 
'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d64f8835-69ed-47b8-9bfe-3e1c6198249d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399163 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--0cbdaba5--e3a8--55ff--9207--33249002ea74-osd--block--0cbdaba5--e3a8--55ff--9207--33249002ea74', 'dm-uuid-LVM-ExvMc93TaGMjWOqGPvd34m2gk1t4oCUOJ6FpMp03P2VtxgLz5RAEn3Dnels013gF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b8397ec--b473--5fab--a988--270c3fd4ebb0-osd--block--0b8397ec--b473--5fab--a988--270c3fd4ebb0', 'dm-uuid-LVM-Iu2PpFdOBa8teqvWKcbfD2Pd2CSRQtEGYNoKuoXUgYBbYH60uaRabo8PgEzus6ML'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:32:53.399186 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})
2025-10-09 10:32:53.399191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee-osd--block--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee', 'dm-uuid-LVM-au3ljzSANb0tyOeMGUgRFh2fQv14LQXqLqkTr72pBhnrUNTZZIjkiDu5w36Kbbq4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399202 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--db411f8a--05b0--54f7--b748--fd517a3c676f-osd--block--db411f8a--05b0--54f7--b748--fd517a3c676f', 'dm-uuid-LVM-jpDQBe8QHm1K0O9IsCStmbdH56NsHtzn6CQJMN93ZeGkA72L0LSc1QtKsLj0mgLC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399222 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399230 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399234 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.399243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399252 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399256 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable':
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399271 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399282 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399752 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part15', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids':
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399777 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0cbdaba5--e3a8--55ff--9207--33249002ea74-osd--block--0cbdaba5--e3a8--55ff--9207--33249002ea74'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uLdo5O-ec3P-ApYI-bZen-ZZ3F-BeNc-Ki292o', 'scsi-0QEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d', 'scsi-SQEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399805 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0b8397ec--b473--5fab--a988--270c3fd4ebb0-osd--block--0b8397ec--b473--5fab--a988--270c3fd4ebb0'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lRzkGE-YzPM-XiG6-y68a-feZr-FiG0-MdFMqH', 'scsi-0QEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84', 'scsi-SQEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399810 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee-osd--block--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K4lFF5-75vb-ZsLJ-bPXw-JnwN-3ljd-cE9Yz9', 'scsi-0QEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d', 'scsi-SQEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e', 'scsi-SQEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399829 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.399836 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83d577c9--ff1a--5f1d--bd0e--44f99d742f78-osd--block--83d577c9--ff1a--5f1d--bd0e--44f99d742f78', 'dm-uuid-LVM-gxlCTZ5efJTHi74imUaaLMcOdZC9sz722geQT9GSu5DHYJXvaxEnu8fZKsMeh9uX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399840 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ce20a60--fba3--5536--8b48--1e48c039a9b4-osd--block--8ce20a60--fba3--5536--8b48--1e48c039a9b4', 'dm-uuid-LVM-2wowwvZuu9v58jhoRFdOjaRASbwQw8Dt4MH44vTkY6o4LKxALHPQgNRz4cfwq14j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399850 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links':
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399857 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--db411f8a--05b0--54f7--b748--fd517a3c676f-osd--block--db411f8a--05b0--54f7--b748--fd517a3c676f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FC0Iqg-a508-XVdl-dcs1-Lm9g-TMVF-jFQuVZ', 'scsi-0QEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168', 'scsi-SQEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399863 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399867 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399871 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399875 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399882 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399888 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part1', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part14', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part15', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part16', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--83d577c9--ff1a--5f1d--bd0e--44f99d742f78-osd--block--83d577c9--ff1a--5f1d--bd0e--44f99d742f78'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FroQeu-Fzqs-f6jq-MAe8-Csas-tQEp-PNWjna', 'scsi-0QEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397', 'scsi-SQEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3', 'scsi-SQEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399912 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8ce20a60--fba3--5536--8b48--1e48c039a9b4-osd--block--8ce20a60--fba3--5536--8b48--1e48c039a9b4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EkQEBW-S5T8-FI78-gRGq-CMjx-7hxO-72EIVH', 'scsi-0QEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425', 'scsi-SQEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f', 'scsi-SQEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-10-09 10:32:53.399926 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.399930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var':
'item'})
2025-10-09 10:32:53.399937 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.399941 | orchestrator |
2025-10-09 10:32:53.399945 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-10-09 10:32:53.399949 | orchestrator | Thursday 09 October 2025 10:21:51 +0000 (0:00:01.659) 0:00:42.461 ******
2025-10-09 10:32:53.399953 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.399957 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.399961 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.399967 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.399971 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.399974 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.399978 | orchestrator |
2025-10-09 10:32:53.399982 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-10-09 10:32:53.399986 | orchestrator | Thursday 09 October 2025 10:21:53 +0000 (0:00:01.386) 0:00:43.848 ******
2025-10-09 10:32:53.399989 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.399993 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.399997 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.400001 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.400004 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.400008 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.400012 | orchestrator |
2025-10-09 10:32:53.400015 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-10-09 10:32:53.400019 | orchestrator | Thursday 09 October 2025 10:21:54 +0000 (0:00:01.161) 0:00:45.010 ******
2025-10-09 10:32:53.400023 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400027 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400031 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400034 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400038 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400042 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400045 | orchestrator |
2025-10-09 10:32:53.400049 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-10-09 10:32:53.400053 | orchestrator | Thursday 09 October 2025 10:21:55 +0000 (0:00:00.966) 0:00:45.980 ******
2025-10-09 10:32:53.400057 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400061 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400064 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400068 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400072 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400075 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400079 | orchestrator |
2025-10-09 10:32:53.400083 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-10-09 10:32:53.400087 | orchestrator | Thursday 09 October 2025 10:21:56 +0000 (0:00:00.856) 0:00:46.836 ******
2025-10-09 10:32:53.400091 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400094 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400098 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400103 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400107 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400111 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400115 | orchestrator |
2025-10-09 10:32:53.400118 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-10-09 10:32:53.400124 | orchestrator | Thursday 09 October 2025 10:21:57 +0000 (0:00:01.111) 0:00:47.947 ******
2025-10-09 10:32:53.400128 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400132 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400136 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400139 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400143 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400147 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400150 | orchestrator |
2025-10-09 10:32:53.400154 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-10-09 10:32:53.400158 | orchestrator | Thursday 09 October 2025 10:21:57 +0000 (0:00:00.681) 0:00:48.629 ******
2025-10-09 10:32:53.400162 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-10-09 10:32:53.400166 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-10-09 10:32:53.400169 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-10-09 10:32:53.400173 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:32:53.400177 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-10-09 10:32:53.400181 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-10-09 10:32:53.400185 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-10-09 10:32:53.400188 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-10-09 10:32:53.400192 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-10-09 10:32:53.400196 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-10-09 10:32:53.400199 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-10-09 10:32:53.400203 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-10-09 10:32:53.400227 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-10-09 10:32:53.400231 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-10-09 10:32:53.400234 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-10-09 10:32:53.400238 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-10-09 10:32:53.400242 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-10-09 10:32:53.400246 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-10-09 10:32:53.400249 | orchestrator |
2025-10-09 10:32:53.400253 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-10-09 10:32:53.400257 | orchestrator | Thursday 09 October 2025 10:22:02 +0000 (0:00:04.200) 0:00:52.829 ******
2025-10-09 10:32:53.400261 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:32:53.400265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-10-09 10:32:53.400268 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-10-09 10:32:53.400272 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400276 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-10-09 10:32:53.400280 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-10-09 10:32:53.400283 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-10-09 10:32:53.400287 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400291 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-10-09 10:32:53.400295 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-10-09 10:32:53.400298 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-10-09 10:32:53.400302 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-10-09 10:32:53.400312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-10-09 10:32:53.400316 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-10-09 10:32:53.400319 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-10-09 10:32:53.400323 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-10-09 10:32:53.400330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-10-09 10:32:53.400333 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400337 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400341 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-10-09 10:32:53.400345 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-10-09 10:32:53.400348 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-10-09 10:32:53.400352 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400356 | orchestrator |
2025-10-09 10:32:53.400360 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-10-09 10:32:53.400363 | orchestrator | Thursday 09 October 2025 10:22:03 +0000 (0:00:01.059) 0:00:53.889 ******
2025-10-09 10:32:53.400367 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400371 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400375 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400379 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.400382 | orchestrator |
2025-10-09 10:32:53.400386 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-10-09 10:32:53.400391 | orchestrator | Thursday 09 October 2025 10:22:04 +0000 (0:00:01.726) 0:00:55.615 ******
2025-10-09 10:32:53.400395 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400399 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400402 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400406 | orchestrator |
2025-10-09 10:32:53.400410 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-10-09 10:32:53.400415 | orchestrator | Thursday 09 October 2025 10:22:05 +0000 (0:00:00.276) 0:00:55.892 ******
2025-10-09 10:32:53.400419 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400423 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400427 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400431 | orchestrator |
2025-10-09 10:32:53.400434 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-10-09 10:32:53.400438 | orchestrator | Thursday 09 October 2025 10:22:05 +0000 (0:00:00.405) 0:00:56.298 ******
2025-10-09 10:32:53.400442 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400446 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400449 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400453 | orchestrator |
2025-10-09 10:32:53.400457 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-10-09 10:32:53.400461 | orchestrator | Thursday 09 October 2025 10:22:05 +0000 (0:00:00.384) 0:00:56.682 ******
2025-10-09 10:32:53.400465 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.400469 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.400472 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.400476 | orchestrator |
2025-10-09 10:32:53.400480 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-10-09 10:32:53.400484 | orchestrator | Thursday 09 October 2025 10:22:07 +0000 (0:00:01.549) 0:00:58.232 ******
2025-10-09 10:32:53.400487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-09 10:32:53.400491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-09 10:32:53.400495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-09 10:32:53.400499 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400502 | orchestrator |
2025-10-09 10:32:53.400506 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-10-09 10:32:53.400510 | orchestrator | Thursday 09 October 2025 10:22:08 +0000 (0:00:00.516) 0:00:58.749 ******
2025-10-09 10:32:53.400514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-09 10:32:53.400517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-09 10:32:53.400523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-09 10:32:53.400527 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400531 | orchestrator |
2025-10-09 10:32:53.400534 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-10-09 10:32:53.400538 | orchestrator | Thursday 09 October 2025 10:22:08 +0000 (0:00:00.511) 0:00:59.261 ******
2025-10-09 10:32:53.400542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-09 10:32:53.400546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-09 10:32:53.400549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-09 10:32:53.400553 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400557 | orchestrator |
2025-10-09 10:32:53.400561 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-10-09 10:32:53.400564 | orchestrator | Thursday 09 October 2025 10:22:08 +0000 (0:00:00.395) 0:00:59.657 ******
2025-10-09 10:32:53.400568 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.400572 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.400576 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.400579 | orchestrator |
2025-10-09 10:32:53.400583 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-10-09 10:32:53.400587 | orchestrator | Thursday 09 October 2025 10:22:09 +0000 (0:00:00.823) 0:01:00.480 ******
2025-10-09 10:32:53.400591 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-10-09 10:32:53.400595 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-10-09 10:32:53.400598 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-10-09 10:32:53.400602 | orchestrator |
2025-10-09 10:32:53.400606 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-10-09 10:32:53.400610 | orchestrator | Thursday 09 October 2025 10:22:11 +0000 (0:00:01.903) 0:01:02.384 ******
2025-10-09 10:32:53.400616 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:32:53.400620 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-10-09 10:32:53.400624 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-10-09 10:32:53.400627 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-10-09 10:32:53.400631 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-10-09 10:32:53.400635 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-10-09 10:32:53.400639 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-10-09 10:32:53.400642 | orchestrator |
2025-10-09 10:32:53.400646 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-10-09 10:32:53.400650 | orchestrator | Thursday 09 October 2025 10:22:13 +0000 (0:00:02.197) 0:01:04.581 ******
2025-10-09 10:32:53.400654 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:32:53.400657 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-10-09 10:32:53.400661 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-10-09 10:32:53.400665 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-10-09 10:32:53.400669 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-10-09 10:32:53.400672 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-10-09 10:32:53.400676 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-10-09 10:32:53.400680 | orchestrator |
2025-10-09 10:32:53.400684 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-10-09 10:32:53.400687 | orchestrator | Thursday 09 October 2025 10:22:15 +0000 (0:00:01.964) 0:01:06.545 ******
2025-10-09 10:32:53.400693 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.400701 | orchestrator |
2025-10-09 10:32:53.400705 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-10-09 10:32:53.400708 | orchestrator | Thursday 09 October 2025 10:22:17 +0000 (0:00:01.455) 0:01:08.001 ******
2025-10-09 10:32:53.400712 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.400716 | orchestrator |
2025-10-09 10:32:53.400720 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-10-09 10:32:53.400724 | orchestrator | Thursday 09 October 2025 10:22:18 +0000 (0:00:01.152) 0:01:09.153 ******
2025-10-09 10:32:53.400728 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.400731 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400735 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400739 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400743 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.400747 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.400750 | orchestrator |
2025-10-09 10:32:53.400754 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-10-09 10:32:53.400758 | orchestrator | Thursday 09 October 2025 10:22:19 +0000 (0:00:01.122) 0:01:10.276 ******
2025-10-09 10:32:53.400762 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400765 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400769 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400773 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.400777 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.400780 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.400784 | orchestrator |
2025-10-09 10:32:53.400788 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-10-09 10:32:53.400792 | orchestrator | Thursday 09 October 2025 10:22:21 +0000 (0:00:01.905) 0:01:12.181 ******
2025-10-09 10:32:53.400795 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400799 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400803 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400807 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.400810 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.400814 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.400818 | orchestrator |
2025-10-09 10:32:53.400822 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-10-09 10:32:53.400826 | orchestrator | Thursday 09 October 2025 10:22:22 +0000 (0:00:01.481) 0:01:13.663 ******
2025-10-09 10:32:53.400829 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400833 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400837 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400841 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.400844 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.400848 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.400852 | orchestrator |
2025-10-09 10:32:53.400856 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-10-09 10:32:53.400859 | orchestrator | Thursday 09 October 2025 10:22:24 +0000 (0:00:01.359) 0:01:15.022 ******
2025-10-09 10:32:53.400863 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.400867 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.400871 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400874 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.400878 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400882 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400886 | orchestrator |
2025-10-09 10:32:53.400890 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-10-09 10:32:53.400893 | orchestrator | Thursday 09 October 2025 10:22:25 +0000 (0:00:01.601) 0:01:16.624 ******
2025-10-09 10:32:53.400899 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400903 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400911 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400915 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400918 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400922 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400926 | orchestrator |
2025-10-09 10:32:53.400930 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-10-09 10:32:53.400933 | orchestrator | Thursday 09 October 2025 10:22:26 +0000 (0:00:01.002) 0:01:17.626 ******
2025-10-09 10:32:53.400937 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.400941 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.400945 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.400948 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.400952 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.400956 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.400959 | orchestrator |
2025-10-09 10:32:53.400963 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-10-09 10:32:53.400967 | orchestrator | Thursday 09 October 2025 10:22:27 +0000 (0:00:00.759) 0:01:18.386 ******
2025-10-09 10:32:53.400971 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.400975 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.400978 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.400982 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.400986 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.400990 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.400993 | orchestrator |
2025-10-09 10:32:53.400997 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-10-09 10:32:53.401001 | orchestrator | Thursday 09 October 2025 10:22:29 +0000 (0:00:01.619) 0:01:20.005 ******
2025-10-09 10:32:53.401005 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.401008 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.401012 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.401016 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.401019 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.401023 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.401027 | orchestrator |
2025-10-09 10:32:53.401031 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-10-09 10:32:53.401034 | orchestrator | Thursday 09 October 2025 10:22:30 +0000 (0:00:00.931) 0:01:21.173 ******
2025-10-09 10:32:53.401038 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.401044 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.401048 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.401051 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.401055 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.401059 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.401062 | orchestrator |
2025-10-09 10:32:53.401066 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-10-09 10:32:53.401070 | orchestrator | Thursday 09 October 2025 10:22:31 +0000 (0:00:00.749) 0:01:22.104 ******
2025-10-09 10:32:53.401074 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.401078 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.401081 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.401085 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.401089 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.401092 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.401096 | orchestrator |
2025-10-09 10:32:53.401100 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-10-09 10:32:53.401104 | orchestrator | Thursday 09 October 2025 10:22:32 +0000 (0:00:00.749) 0:01:22.853 ******
2025-10-09 10:32:53.401107 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.401111 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.401115 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.401119 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.401122 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.401126 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.401132 | orchestrator |
2025-10-09 10:32:53.401136 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-10-09 10:32:53.401140 | orchestrator | Thursday 09 October 2025 10:22:33 +0000 (0:00:01.146) 0:01:24.000 ******
2025-10-09 10:32:53.401144 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.401147 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.401151 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.401155 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.401158 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.401162 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.401166 | orchestrator |
2025-10-09 10:32:53.401170 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-10-09 10:32:53.401174 | orchestrator | Thursday 09 October 2025 10:22:33 +0000 (0:00:00.627) 0:01:24.628 ******
2025-10-09 10:32:53.401177 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.401181 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.401185 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.401189 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.401192 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.401196 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.401200 | orchestrator |
2025-10-09 10:32:53.401204 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-10-09 10:32:53.401235 | orchestrator | Thursday 09 October 2025 10:22:34 +0000 (0:00:00.913) 0:01:25.542 ******
2025-10-09 10:32:53.401239 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.401243 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.401247 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.401250 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.401254 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.401258 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.401262 | orchestrator |
2025-10-09 10:32:53.401265 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-10-09 10:32:53.401269 | orchestrator | Thursday 09 October 2025 10:22:35 +0000 (0:00:00.705) 0:01:26.247 ******
2025-10-09 10:32:53.401273 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.401277 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.401281 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.401285 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.401288 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.401292 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.401296 | orchestrator |
2025-10-09 10:32:53.401300 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-10-09 10:32:53.401306 | orchestrator | Thursday 09 October 2025 10:22:36 +0000 (0:00:00.826) 0:01:27.074 ******
2025-10-09 10:32:53.401310 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.401314 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.401318 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.401321 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.401325 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.401329 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.401333 | orchestrator |
2025-10-09 10:32:53.401336 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-10-09 10:32:53.401340 | orchestrator | Thursday 09 October 2025 10:22:37 +0000 (0:00:00.650) 0:01:27.725 ******
2025-10-09 10:32:53.401344 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.401348 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.401352 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.401355 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.401359 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.401363 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.401367 | orchestrator |
2025-10-09 10:32:53.401370 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-10-09 10:32:53.401374 | orchestrator | Thursday 09 October 2025 10:22:37 +0000 (0:00:00.909) 0:01:28.634 ******
2025-10-09 10:32:53.401383 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.401386 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.401390 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.401394 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.401398 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.401401 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.401405 | orchestrator |
2025-10-09 10:32:53.401409 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-10-09 10:32:53.401413 | orchestrator | Thursday 09 October 2025 10:22:39 +0000 (0:00:01.335) 0:01:29.970 ******
2025-10-09 10:32:53.401417 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.401420 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.401424 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.401428 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:32:53.401432 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:32:53.401436 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:32:53.401439 | orchestrator |
2025-10-09 10:32:53.401443 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-10-09 10:32:53.401447 | orchestrator | Thursday 09 October 2025 10:22:40 +0000 (0:00:01.699) 0:01:31.669 ******
2025-10-09 10:32:53.401452 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.401456 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:32:53.401460 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.401464 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.401467 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:32:53.401471 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:32:53.401475 | orchestrator |
2025-10-09 10:32:53.401479 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-10-09 10:32:53.401482 | orchestrator | Thursday 09 October 2025 10:22:43 +0000 (0:00:02.406) 0:01:34.076 ******
2025-10-09 10:32:53.401486 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.401490 | orchestrator |
2025-10-09 10:32:53.401494 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-10-09 10:32:53.401498 | orchestrator | Thursday 09 October 2025 10:22:44 +0000 (0:00:01.253) 0:01:35.330 ******
2025-10-09 10:32:53.401501 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.401505 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.401509 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.401513 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.401516 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.401520 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.401524 | orchestrator |
2025-10-09 10:32:53.401528 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-10-09 10:32:53.401532 | orchestrator | Thursday 09 October 2025 10:22:45 +0000 (0:00:00.686) 0:01:36.016 ******
2025-10-09 10:32:53.401535 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.401539 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.401543 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.401547 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.401550 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.401554 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.401558 | orchestrator |
2025-10-09 10:32:53.401562 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-10-09 10:32:53.401566 | orchestrator | Thursday 09 October 2025 10:22:46 +0000 (0:00:00.919) 0:01:36.935 ******
2025-10-09 10:32:53.401569 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-09 10:32:53.401573 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-09 10:32:53.401577 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-09 10:32:53.401581 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-09 10:32:53.401587 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-09 10:32:53.401591 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-10-09 10:32:53.401595 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-09 10:32:53.401599 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-09 10:32:53.401603 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-09 10:32:53.401606 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-09 10:32:53.401610 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-09 10:32:53.401614 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-10-09 10:32:53.401618 | orchestrator |
2025-10-09 10:32:53.401624 | orchestrator | TASK
[ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-10-09 10:32:53.401628 | orchestrator | Thursday 09 October 2025 10:22:47 +0000 (0:00:01.397) 0:01:38.333 ****** 2025-10-09 10:32:53.401631 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.401635 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.401639 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.401643 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.401646 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.401650 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.401654 | orchestrator | 2025-10-09 10:32:53.401658 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-10-09 10:32:53.401662 | orchestrator | Thursday 09 October 2025 10:22:48 +0000 (0:00:01.266) 0:01:39.600 ****** 2025-10-09 10:32:53.401665 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.401669 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.401673 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.401677 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.401680 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.401684 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.401688 | orchestrator | 2025-10-09 10:32:53.401692 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-10-09 10:32:53.401695 | orchestrator | Thursday 09 October 2025 10:22:49 +0000 (0:00:00.935) 0:01:40.536 ****** 2025-10-09 10:32:53.401699 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.401703 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.401707 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.401710 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.401714 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.401718 | 
orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.401722 | orchestrator | 2025-10-09 10:32:53.401725 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-10-09 10:32:53.401729 | orchestrator | Thursday 09 October 2025 10:22:50 +0000 (0:00:00.878) 0:01:41.415 ****** 2025-10-09 10:32:53.401733 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.401737 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.401740 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.401744 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.401748 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.401753 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.401757 | orchestrator | 2025-10-09 10:32:53.401761 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-10-09 10:32:53.401765 | orchestrator | Thursday 09 October 2025 10:22:51 +0000 (0:00:00.615) 0:01:42.030 ****** 2025-10-09 10:32:53.401769 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.401773 | orchestrator | 2025-10-09 10:32:53.401776 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-10-09 10:32:53.401782 | orchestrator | Thursday 09 October 2025 10:22:52 +0000 (0:00:01.223) 0:01:43.254 ****** 2025-10-09 10:32:53.401786 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.401790 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.401794 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.401797 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.401801 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.401805 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.401809 | orchestrator | 2025-10-09 10:32:53.401812 
| orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-10-09 10:32:53.401816 | orchestrator | Thursday 09 October 2025 10:23:43 +0000 (0:00:51.014) 0:02:34.268 ****** 2025-10-09 10:32:53.401820 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:32:53.401824 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:32:53.401828 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:32:53.401832 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.401835 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:32:53.401839 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:32:53.401843 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:32:53.401847 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.401851 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:32:53.401854 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:32:53.401858 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:32:53.401862 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.401866 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:32:53.401870 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:32:53.401873 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:32:53.401877 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.401881 | orchestrator | skipping: [testbed-node-4] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:32:53.401885 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:32:53.401888 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:32:53.401892 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.401896 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-10-09 10:32:53.401900 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-10-09 10:32:53.401903 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-10-09 10:32:53.401909 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.401913 | orchestrator | 2025-10-09 10:32:53.401917 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-10-09 10:32:53.401921 | orchestrator | Thursday 09 October 2025 10:23:44 +0000 (0:00:00.900) 0:02:35.169 ****** 2025-10-09 10:32:53.401924 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.401928 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.401932 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.401936 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.401939 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.401943 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.401947 | orchestrator | 2025-10-09 10:32:53.401951 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-10-09 10:32:53.401954 | orchestrator | Thursday 09 October 2025 10:23:45 +0000 (0:00:00.766) 0:02:35.936 ****** 2025-10-09 10:32:53.401961 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.401964 | orchestrator | 2025-10-09 10:32:53.401968 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 
2025-10-09 10:32:53.401972 | orchestrator | Thursday 09 October 2025 10:23:45 +0000 (0:00:00.408) 0:02:36.345 ****** 2025-10-09 10:32:53.401976 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.401980 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.401983 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.401987 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.401991 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.401994 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.401998 | orchestrator | 2025-10-09 10:32:53.402002 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-10-09 10:32:53.402006 | orchestrator | Thursday 09 October 2025 10:23:46 +0000 (0:00:00.848) 0:02:37.193 ****** 2025-10-09 10:32:53.402010 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402013 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402045 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402049 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402053 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.402057 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402061 | orchestrator | 2025-10-09 10:32:53.402065 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-10-09 10:32:53.402071 | orchestrator | Thursday 09 October 2025 10:23:47 +0000 (0:00:00.971) 0:02:38.165 ****** 2025-10-09 10:32:53.402075 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402078 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402082 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402086 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402090 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.402093 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402097 | 
orchestrator | 2025-10-09 10:32:53.402101 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-10-09 10:32:53.402104 | orchestrator | Thursday 09 October 2025 10:23:48 +0000 (0:00:00.711) 0:02:38.876 ****** 2025-10-09 10:32:53.402108 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.402112 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.402116 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.402119 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.402123 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.402127 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.402131 | orchestrator | 2025-10-09 10:32:53.402134 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-10-09 10:32:53.402138 | orchestrator | Thursday 09 October 2025 10:23:50 +0000 (0:00:02.740) 0:02:41.617 ****** 2025-10-09 10:32:53.402142 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.402146 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.402149 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.402153 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.402157 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.402160 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.402164 | orchestrator | 2025-10-09 10:32:53.402168 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-10-09 10:32:53.402172 | orchestrator | Thursday 09 October 2025 10:23:51 +0000 (0:00:00.638) 0:02:42.256 ****** 2025-10-09 10:32:53.402176 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.402181 | orchestrator | 2025-10-09 10:32:53.402184 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-10-09 
10:32:53.402188 | orchestrator | Thursday 09 October 2025 10:23:52 +0000 (0:00:01.328) 0:02:43.584 ****** 2025-10-09 10:32:53.402192 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402196 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402202 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402231 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402236 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.402240 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402244 | orchestrator | 2025-10-09 10:32:53.402247 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-10-09 10:32:53.402251 | orchestrator | Thursday 09 October 2025 10:23:53 +0000 (0:00:00.719) 0:02:44.303 ****** 2025-10-09 10:32:53.402255 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402259 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402262 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402266 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402270 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.402274 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402277 | orchestrator | 2025-10-09 10:32:53.402281 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-10-09 10:32:53.402285 | orchestrator | Thursday 09 October 2025 10:23:54 +0000 (0:00:01.047) 0:02:45.351 ****** 2025-10-09 10:32:53.402289 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402292 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402296 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402300 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402304 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.402307 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402311 | orchestrator | 
2025-10-09 10:32:53.402315 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-10-09 10:32:53.402322 | orchestrator | Thursday 09 October 2025 10:23:55 +0000 (0:00:00.776) 0:02:46.127 ****** 2025-10-09 10:32:53.402326 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402330 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402334 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402337 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402341 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.402345 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402349 | orchestrator | 2025-10-09 10:32:53.402352 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-10-09 10:32:53.402356 | orchestrator | Thursday 09 October 2025 10:23:56 +0000 (0:00:01.308) 0:02:47.436 ****** 2025-10-09 10:32:53.402360 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402364 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402367 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402371 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402375 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.402379 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402382 | orchestrator | 2025-10-09 10:32:53.402386 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-10-09 10:32:53.402390 | orchestrator | Thursday 09 October 2025 10:23:57 +0000 (0:00:00.777) 0:02:48.213 ****** 2025-10-09 10:32:53.402394 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402397 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402401 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402405 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402409 | orchestrator | 
skipping: [testbed-node-4] 2025-10-09 10:32:53.402412 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402416 | orchestrator | 2025-10-09 10:32:53.402420 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-10-09 10:32:53.402424 | orchestrator | Thursday 09 October 2025 10:23:58 +0000 (0:00:01.212) 0:02:49.426 ****** 2025-10-09 10:32:53.402427 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402431 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402435 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402439 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402442 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.402451 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402455 | orchestrator | 2025-10-09 10:32:53.402461 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-10-09 10:32:53.402465 | orchestrator | Thursday 09 October 2025 10:23:59 +0000 (0:00:00.758) 0:02:50.184 ****** 2025-10-09 10:32:53.402469 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.402472 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.402476 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.402480 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.402484 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.402487 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.402491 | orchestrator | 2025-10-09 10:32:53.402495 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-10-09 10:32:53.402499 | orchestrator | Thursday 09 October 2025 10:24:00 +0000 (0:00:01.217) 0:02:51.402 ****** 2025-10-09 10:32:53.402503 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.402506 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.402510 | orchestrator | ok: 
[testbed-node-2] 2025-10-09 10:32:53.402514 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.402518 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.402522 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.402525 | orchestrator | 2025-10-09 10:32:53.402529 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-10-09 10:32:53.402533 | orchestrator | Thursday 09 October 2025 10:24:02 +0000 (0:00:01.832) 0:02:53.234 ****** 2025-10-09 10:32:53.402537 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.402541 | orchestrator | 2025-10-09 10:32:53.402544 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-10-09 10:32:53.402548 | orchestrator | Thursday 09 October 2025 10:24:04 +0000 (0:00:01.554) 0:02:54.789 ****** 2025-10-09 10:32:53.402552 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-10-09 10:32:53.402556 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-10-09 10:32:53.402560 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-10-09 10:32:53.402563 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-10-09 10:32:53.402567 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-10-09 10:32:53.402571 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-10-09 10:32:53.402575 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-10-09 10:32:53.402578 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-10-09 10:32:53.402582 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-10-09 10:32:53.402586 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-10-09 10:32:53.402590 | orchestrator | changed: [testbed-node-4] => 
(item=/var/lib/ceph/) 2025-10-09 10:32:53.402593 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-10-09 10:32:53.402597 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-10-09 10:32:53.402601 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-10-09 10:32:53.402605 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-10-09 10:32:53.402608 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-10-09 10:32:53.402612 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-10-09 10:32:53.402616 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-10-09 10:32:53.402620 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-10-09 10:32:53.402623 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-10-09 10:32:53.402627 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-10-09 10:32:53.402638 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-10-09 10:32:53.402645 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-10-09 10:32:53.402649 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-10-09 10:32:53.402653 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-10-09 10:32:53.402656 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-10-09 10:32:53.402660 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-10-09 10:32:53.402664 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-10-09 10:32:53.402668 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-10-09 10:32:53.402672 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-10-09 10:32:53.402675 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-10-09 
10:32:53.402679 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-10-09 10:32:53.402683 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-10-09 10:32:53.402687 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-10-09 10:32:53.402690 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-10-09 10:32:53.402694 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-10-09 10:32:53.402698 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-10-09 10:32:53.402702 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-10-09 10:32:53.402705 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-10-09 10:32:53.402709 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-10-09 10:32:53.402713 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:32:53.402717 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-10-09 10:32:53.402721 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-10-09 10:32:53.402724 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:32:53.402728 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:32:53.402733 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:32:53.402737 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:32:53.402741 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:32:53.402745 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-10-09 10:32:53.402749 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:32:53.402752 | orchestrator | changed: [testbed-node-4] => 
(item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:32:53.402756 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:32:53.402760 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:32:53.402764 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:32:53.402768 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-10-09 10:32:53.402771 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:32:53.402775 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:32:53.402779 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:32:53.402783 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:32:53.402787 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:32:53.402790 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-10-09 10:32:53.402794 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:32:53.402798 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:32:53.402802 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:32:53.402810 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:32:53.402814 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:32:53.402818 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-10-09 10:32:53.402821 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:32:53.402825 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 
10:32:53.402829 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:32:53.402833 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:32:53.402837 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:32:53.402840 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-10-09 10:32:53.402844 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:32:53.402848 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:32:53.402852 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:32:53.402856 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:32:53.402859 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:32:53.402863 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-10-09 10:32:53.402869 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:32:53.402873 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-10-09 10:32:53.402877 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:32:53.402881 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:32:53.402885 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:32:53.402888 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-10-09 10:32:53.402892 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-10-09 10:32:53.402896 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-10-09 10:32:53.402899 | orchestrator | changed: 
[testbed-node-2] => (item=/var/run/ceph)
2025-10-09 10:32:53.402903 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-10-09 10:32:53.402907 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-10-09 10:32:53.402911 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-10-09 10:32:53.402915 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-10-09 10:32:53.402918 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-10-09 10:32:53.402922 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-10-09 10:32:53.402926 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-10-09 10:32:53.402929 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-10-09 10:32:53.402933 | orchestrator |
2025-10-09 10:32:53.402937 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-10-09 10:32:53.402941 | orchestrator | Thursday 09 October 2025 10:24:11 +0000 (0:00:07.056) 0:03:01.846 ******
2025-10-09 10:32:53.402944 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.402948 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.402952 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.402956 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.402960 | orchestrator |
2025-10-09 10:32:53.402965 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-10-09 10:32:53.402969 | orchestrator | Thursday 09 October 2025 10:24:12 +0000 (0:00:01.289) 0:03:03.135 ******
2025-10-09 10:32:53.402975 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.402980 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.402984 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.402988 | orchestrator |
2025-10-09 10:32:53.402991 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-10-09 10:32:53.402995 | orchestrator | Thursday 09 October 2025 10:24:13 +0000 (0:00:01.315) 0:03:04.451 ******
2025-10-09 10:32:53.402999 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.403003 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.403007 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.403010 | orchestrator |
2025-10-09 10:32:53.403014 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-10-09 10:32:53.403018 | orchestrator | Thursday 09 October 2025 10:24:16 +0000 (0:00:02.398) 0:03:06.849 ******
2025-10-09 10:32:53.403022 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403025 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403029 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403033 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.403037 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.403040 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.403044 | orchestrator |
2025-10-09 10:32:53.403048 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-10-09 10:32:53.403052 | orchestrator | Thursday 09 October 2025 10:24:16 +0000 (0:00:00.738) 0:03:07.588 ******
2025-10-09 10:32:53.403055 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403059 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403063 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403067 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.403070 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.403074 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.403078 | orchestrator |
2025-10-09 10:32:53.403082 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-10-09 10:32:53.403086 | orchestrator | Thursday 09 October 2025 10:24:17 +0000 (0:00:01.093) 0:03:08.682 ******
2025-10-09 10:32:53.403089 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403093 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403097 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403100 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403104 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403108 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403112 | orchestrator |
2025-10-09 10:32:53.403115 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-10-09 10:32:53.403119 | orchestrator | Thursday 09 October 2025 10:24:18 +0000 (0:00:00.941) 0:03:09.623 ******
2025-10-09 10:32:53.403123 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403127 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403133 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403137 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403140 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403144 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403148 | orchestrator |
2025-10-09 10:32:53.403152 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-10-09 10:32:53.403155 | orchestrator | Thursday 09 October 2025 10:24:19 +0000 (0:00:01.056) 0:03:10.679 ******
2025-10-09 10:32:53.403161 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403165 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403169 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403173 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403176 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403180 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403184 | orchestrator |
2025-10-09 10:32:53.403187 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-10-09 10:32:53.403191 | orchestrator | Thursday 09 October 2025 10:24:21 +0000 (0:00:01.349) 0:03:12.029 ******
2025-10-09 10:32:53.403195 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403199 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403203 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403215 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403219 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403223 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403226 | orchestrator |
2025-10-09 10:32:53.403230 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-10-09 10:32:53.403234 | orchestrator | Thursday 09 October 2025 10:24:22 +0000 (0:00:00.765) 0:03:12.795 ******
2025-10-09 10:32:53.403238 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403241 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403245 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403249 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403252 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403256 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403260 | orchestrator |
2025-10-09 10:32:53.403264 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-10-09 10:32:53.403269 | orchestrator | Thursday 09 October 2025 10:24:23 +0000 (0:00:01.129) 0:03:13.925 ******
2025-10-09 10:32:53.403273 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403277 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403280 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403284 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403288 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403291 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403295 | orchestrator |
2025-10-09 10:32:53.403299 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-10-09 10:32:53.403303 | orchestrator | Thursday 09 October 2025 10:24:23 +0000 (0:00:00.667) 0:03:14.593 ******
2025-10-09 10:32:53.403306 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403310 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403314 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403317 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.403321 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.403325 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.403329 | orchestrator |
2025-10-09 10:32:53.403332 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-10-09 10:32:53.403336 | orchestrator | Thursday 09 October 2025 10:24:27 +0000 (0:00:03.432) 0:03:18.026 ******
2025-10-09 10:32:53.403340 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403344 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403347 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403351 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.403355 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.403359 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.403362 | orchestrator |
2025-10-09 10:32:53.403366 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-10-09 10:32:53.403370 | orchestrator | Thursday 09 October 2025 10:24:28 +0000 (0:00:00.847) 0:03:18.873 ******
2025-10-09 10:32:53.403374 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403377 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403383 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403387 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.403390 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.403394 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.403398 | orchestrator |
2025-10-09 10:32:53.403402 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-10-09 10:32:53.403405 | orchestrator | Thursday 09 October 2025 10:24:29 +0000 (0:00:01.237) 0:03:20.111 ******
2025-10-09 10:32:53.403409 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403413 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403416 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403420 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403424 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403428 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403431 | orchestrator |
2025-10-09 10:32:53.403435 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-10-09 10:32:53.403439 | orchestrator | Thursday 09 October 2025 10:24:30 +0000 (0:00:00.829) 0:03:20.940 ******
2025-10-09 10:32:53.403443 | orchestrator | skipping: [testbed-node-0]
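The `num_osds` and `_osd_memory_target` tasks above derive two facts per OSD host: the OSD count (from the JSON of `ceph-volume lvm batch --report`, which the log handles in both a legacy and a newer shape) and a per-OSD memory budget. A minimal Python sketch of that derivation, under stated assumptions — the 0.7 safety factor, the exact report shapes, and the function names are illustrative guesses, not taken from this log:

```python
import json

def count_osds(report: str) -> int:
    """Count OSDs in a `ceph-volume lvm batch --report --format=json` payload.
    Assumption: the legacy report is a dict with an "osds" list, the newer
    report is a flat list of OSD specs (mirroring the two set_fact tasks)."""
    data = json.loads(report)
    if isinstance(data, dict):        # legacy report: {"osds": [...], "vgs": [...]}
        return len(data.get("osds", []))
    return len(data)                  # newer report: flat list of OSD specs

def osd_memory_target(memtotal_mb: int, num_osds: int,
                      safety_factor: float = 0.7) -> int:
    """Bytes of RAM budgeted per OSD daemon: a fraction of host memory
    split evenly across the OSDs (safety_factor value is an assumption)."""
    return int(memtotal_mb * 1024 * 1024 * safety_factor / num_osds)

legacy = json.dumps({"osds": [{"data": "/dev/sdb"}, {"data": "/dev/sdc"}]})
print(count_osds(legacy))                 # 2
print(osd_memory_target(8192, 2))
```

Counting existing OSDs via `ceph-volume lvm list` (the next task in the log) would simply add to this total before the memory split is computed.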
2025-10-09 10:32:53.403446 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403450 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403454 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.403458 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.403462 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.403465 | orchestrator |
2025-10-09 10:32:53.403469 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-10-09 10:32:53.403475 | orchestrator | Thursday 09 October 2025 10:24:31 +0000 (0:00:01.248) 0:03:22.189 ******
2025-10-09 10:32:53.403479 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403482 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403486 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403490 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-10-09 10:32:53.403495 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-10-09 10:32:53.403501 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-10-09 10:32:53.403505 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403509 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-10-09 10:32:53.403513 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403518 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-10-09 10:32:53.403524 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-10-09 10:32:53.403528 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403532 | orchestrator |
2025-10-09 10:32:53.403535 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-10-09 10:32:53.403539 | orchestrator | Thursday 09 October 2025 10:24:32 +0000 (0:00:00.895) 0:03:23.085 ******
2025-10-09 10:32:53.403543 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403547 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403550 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403554 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403558 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403561 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403565 | orchestrator |
2025-10-09 10:32:53.403569 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-10-09 10:32:53.403572 | orchestrator | Thursday 09 October 2025 10:24:33 +0000 (0:00:00.998) 0:03:24.083 ******
2025-10-09 10:32:53.403576 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403580 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403584 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403587 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403591 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403595 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403598 | orchestrator |
2025-10-09 10:32:53.403602 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-10-09 10:32:53.403606 | orchestrator | Thursday 09 October 2025 10:24:34 +0000 (0:00:00.819) 0:03:24.903 ******
2025-10-09 10:32:53.403610 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403613 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403617 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403621 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403624 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403628 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403632 | orchestrator |
2025-10-09 10:32:53.403636 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-10-09 10:32:53.403639 | orchestrator | Thursday 09 October 2025 10:24:35 +0000 (0:00:01.298) 0:03:26.202 ******
2025-10-09 10:32:53.403643 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403647 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403650 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403654 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403658 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403661 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403665 | orchestrator |
2025-10-09 10:32:53.403669 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-10-09 10:32:53.403673 | orchestrator | Thursday 09 October 2025 10:24:36 +0000 (0:00:00.745) 0:03:26.947 ******
2025-10-09 10:32:53.403676 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403680 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403684 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403690 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403694 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403697 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403701 | orchestrator |
2025-10-09 10:32:53.403705 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-10-09 10:32:53.403709 | orchestrator | Thursday 09 October 2025 10:24:37 +0000 (0:00:00.906) 0:03:27.854 ******
2025-10-09 10:32:53.403712 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403718 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403722 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403725 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.403729 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.403733 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.403737 | orchestrator |
2025-10-09 10:32:53.403740 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-10-09 10:32:53.403744 | orchestrator | Thursday 09 October 2025 10:24:38 +0000 (0:00:00.971) 0:03:28.825 ******
2025-10-09 10:32:53.403748 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-10-09 10:32:53.403752 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-10-09 10:32:53.403755 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-10-09 10:32:53.403759 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403763 | orchestrator |
2025-10-09 10:32:53.403766 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-10-09 10:32:53.403770 | orchestrator | Thursday 09 October 2025 10:24:38 +0000 (0:00:00.745) 0:03:29.570 ******
2025-10-09 10:32:53.403774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-10-09 10:32:53.403778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-10-09 10:32:53.403781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-10-09 10:32:53.403785 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403789 | orchestrator |
2025-10-09 10:32:53.403792 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-10-09 10:32:53.403796 | orchestrator | Thursday 09 October 2025 10:24:39 +0000 (0:00:00.745) 0:03:30.315 ******
2025-10-09 10:32:53.403800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-10-09 10:32:53.403805 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-10-09 10:32:53.403809 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-10-09 10:32:53.403813 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403817 | orchestrator |
2025-10-09 10:32:53.403820 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-10-09 10:32:53.403824 | orchestrator | Thursday 09 October 2025 10:24:40 +0000 (0:00:01.125) 0:03:31.440 ******
2025-10-09 10:32:53.403828 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403831 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403835 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403839 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.403843 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.403846 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.403850 | orchestrator |
2025-10-09 10:32:53.403854 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-10-09 10:32:53.403857 | orchestrator | Thursday 09 October 2025 10:24:41 +0000 (0:00:00.862) 0:03:32.303 ******
2025-10-09 10:32:53.403861 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-10-09 10:32:53.403865 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.403869 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-10-09 10:32:53.403872 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-10-09 10:32:53.403876 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.403880 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.403884 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-10-09 10:32:53.403887 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-10-09 10:32:53.403891 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-10-09 10:32:53.403895 | orchestrator |
2025-10-09 10:32:53.403898 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-10-09 10:32:53.403902 | orchestrator | Thursday 09 October 2025 10:24:44 +0000 (0:00:03.039) 0:03:35.343 ******
2025-10-09 10:32:53.403906 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.403910 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.403913 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.403919 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:32:53.403923 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:32:53.403926 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:32:53.403930 | orchestrator |
2025-10-09 10:32:53.403934 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-10-09 10:32:53.403938 | orchestrator | Thursday 09 October 2025 10:24:48 +0000 (0:00:03.688) 0:03:39.031 ******
2025-10-09 10:32:53.403941 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.403945 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.403949 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.403952 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:32:53.403956 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:32:53.403960 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:32:53.403963 | orchestrator |
2025-10-09 10:32:53.403967 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-10-09 10:32:53.403971 | orchestrator | Thursday 09 October 2025 10:24:49 +0000 (0:00:01.326) 0:03:40.358 ******
2025-10-09 10:32:53.403975 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.403978 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.403982 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.403986 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:32:53.403990 | orchestrator |
2025-10-09 10:32:53.403993 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-10-09 10:32:53.403997 | orchestrator | Thursday 09 October 2025 10:24:51 +0000 (0:00:01.484) 0:03:41.842 ******
2025-10-09 10:32:53.404001 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.404005 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.404008 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.404012 | orchestrator |
2025-10-09 10:32:53.404016 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-10-09 10:32:53.404022 | orchestrator | Thursday 09 October 2025 10:24:51 +0000 (0:00:00.560) 0:03:42.402 ******
2025-10-09 10:32:53.404026 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.404030 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.404033 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.404037 | orchestrator |
2025-10-09 10:32:53.404041 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-10-09 10:32:53.404044 | orchestrator | Thursday 09 October 2025 10:24:53 +0000 (0:00:01.467) 0:03:43.870 ******
2025-10-09 10:32:53.404048 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:32:53.404052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-10-09 10:32:53.404056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-10-09 10:32:53.404059 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.404063 | orchestrator |
2025-10-09 10:32:53.404067 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-10-09 10:32:53.404071 | orchestrator | Thursday 09 October 2025 10:24:54 +0000 (0:00:00.963) 0:03:44.834 ******
2025-10-09 10:32:53.404074 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.404078 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.404082 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.404085 | orchestrator |
2025-10-09 10:32:53.404089 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-10-09 10:32:53.404093 | orchestrator | Thursday 09 October 2025 10:24:54 +0000 (0:00:00.714) 0:03:45.548 ******
2025-10-09 10:32:53.404097 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.404100 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.404104 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.404108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.404112 | orchestrator |
2025-10-09 10:32:53.404115 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-10-09 10:32:53.404121 | orchestrator | Thursday 09 October 2025 10:24:55 +0000 (0:00:00.990) 0:03:46.538 ******
2025-10-09 10:32:53.404125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-09 10:32:53.404129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-09 10:32:53.404134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-09 10:32:53.404138 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404142 | orchestrator |
2025-10-09 10:32:53.404146 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-10-09 10:32:53.404149 | orchestrator | Thursday 09 October 2025 10:24:56 +0000 (0:00:00.826) 0:03:47.365 ******
2025-10-09 10:32:53.404153 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404157 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.404161 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.404164 | orchestrator |
2025-10-09 10:32:53.404168 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-10-09 10:32:53.404172 | orchestrator | Thursday 09 October 2025 10:24:57 +0000 (0:00:00.759) 0:03:48.125 ******
2025-10-09 10:32:53.404176 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404179 | orchestrator |
2025-10-09 10:32:53.404183 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-10-09 10:32:53.404187 | orchestrator | Thursday 09 October 2025 10:24:57 +0000 (0:00:00.314) 0:03:48.439 ******
2025-10-09 10:32:53.404191 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404194 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.404198 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.404202 | orchestrator |
2025-10-09 10:32:53.404213 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-10-09 10:32:53.404217 | orchestrator | Thursday 09 October 2025 10:24:58 +0000 (0:00:00.508) 0:03:48.948 ******
2025-10-09 10:32:53.404221 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404225 | orchestrator |
2025-10-09 10:32:53.404229 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-10-09 10:32:53.404232 | orchestrator | Thursday 09 October 2025 10:24:58 +0000 (0:00:00.276) 0:03:49.224 ******
2025-10-09 10:32:53.404236 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404240 | orchestrator |
2025-10-09 10:32:53.404244 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-10-09 10:32:53.404247 | orchestrator | Thursday 09 October 2025 10:24:58 +0000 (0:00:00.248) 0:03:49.472 ******
2025-10-09 10:32:53.404251 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404255 | orchestrator |
2025-10-09 10:32:53.404259 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-10-09 10:32:53.404263 | orchestrator | Thursday 09 October 2025 10:24:58 +0000 (0:00:00.452) 0:03:49.595 ******
2025-10-09 10:32:53.404266 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404270 | orchestrator |
2025-10-09 10:32:53.404274 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-10-09 10:32:53.404278 | orchestrator | Thursday 09 October 2025 10:24:59 +0000 (0:00:00.267) 0:03:50.047 ******
2025-10-09 10:32:53.404281 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404285 | orchestrator |
2025-10-09 10:32:53.404289 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-10-09 10:32:53.404292 | orchestrator | Thursday 09 October 2025 10:24:59 +0000 (0:00:00.786) 0:03:50.315 ******
2025-10-09 10:32:53.404296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-09 10:32:53.404300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-09 10:32:53.404304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-09 10:32:53.404308 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404311 | orchestrator |
2025-10-09 10:32:53.404315 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-10-09 10:32:53.404319 | orchestrator | Thursday 09 October 2025 10:25:00 +0000 (0:00:00.786) 0:03:51.102 ******
2025-10-09 10:32:53.404325 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404329 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.404333 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.404336 | orchestrator |
2025-10-09 10:32:53.404342 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-10-09 10:32:53.404346 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:00.780) 0:03:51.882 ******
2025-10-09 10:32:53.404350 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404354 | orchestrator |
2025-10-09 10:32:53.404358 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-10-09 10:32:53.404362 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:00.237) 0:03:52.120 ******
2025-10-09 10:32:53.404366 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404369 | orchestrator |
2025-10-09 10:32:53.404373 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-10-09 10:32:53.404377 | orchestrator | Thursday 09 October 2025 10:25:01 +0000 (0:00:00.315) 0:03:52.436 ******
2025-10-09 10:32:53.404381 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.404385 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.404388 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.404392 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.404396 | orchestrator |
2025-10-09 10:32:53.404400 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-10-09 10:32:53.404404 | orchestrator | Thursday 09 October 2025 10:25:03 +0000 (0:00:02.047) 0:03:54.483 ******
2025-10-09 10:32:53.404407 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.404411 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.404415 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.404419 | orchestrator |
2025-10-09 10:32:53.404423 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-10-09 10:32:53.404426 | orchestrator | Thursday 09 October 2025 10:25:04 +0000 (0:00:00.705) 0:03:55.189 ******
2025-10-09 10:32:53.404430 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:32:53.404434 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:32:53.404438 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:32:53.404442 | orchestrator |
2025-10-09 10:32:53.404445 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-10-09 10:32:53.404449 | orchestrator | Thursday 09 October 2025 10:25:06 +0000 (0:00:02.431) 0:03:57.620 ******
2025-10-09 10:32:53.404453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-09 10:32:53.404459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-09 10:32:53.404463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-09 10:32:53.404466 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.404470 | orchestrator |
2025-10-09 10:32:53.404474 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-10-09 10:32:53.404478 | orchestrator | Thursday 09 October 2025 10:25:07 +0000 (0:00:00.918) 0:03:58.539 ******
2025-10-09 10:32:53.404481 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.404485 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.404489 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.404493 | orchestrator |
2025-10-09 10:32:53.404497 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-10-09 10:32:53.404500 | orchestrator | Thursday 09 October 2025 10:25:08 +0000 (0:00:00.454) 0:03:58.993 ******
2025-10-09 10:32:53.404504 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.404508 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.404512 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.404516 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.404519 | orchestrator |
2025-10-09 10:32:53.404523 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-10-09 10:32:53.404530 | orchestrator | Thursday 09 October 2025 10:25:09 +0000 (0:00:01.336) 0:04:00.330 ******
2025-10-09 10:32:53.404534 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.404538 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.404542 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.404545 | orchestrator |
2025-10-09 10:32:53.404549 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
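The "Render rgw configs" and "Set config to cluster" tasks earlier in this run assemble one config section per RGW instance, keyed as `client.rgw.<cluster>.<host>.<instance>`, with `log_file` and a beast `rgw_frontends` endpoint built from the `rgw_instances` fact. A minimal Python sketch of that assembly, using the exact values visible in the log above (the function name and dict layout are illustrative assumptions):

```python
# Build per-instance RGW config sections from an rgw_instances-style fact,
# matching the client.rgw.<cluster>.<host>.<instance> keys seen in the log.
def render_rgw_sections(cluster, hostname, instances):
    sections = {}
    for inst in instances:
        name = inst["instance_name"]
        key = f"client.rgw.{cluster}.{hostname}.{name}"
        sections[key] = {
            "log_file": f"/var/log/ceph/ceph-rgw-{cluster}-{hostname}.{name}.log",
            "rgw_frontends": (
                f"beast endpoint={inst['radosgw_address']}:"
                f"{inst['radosgw_frontend_port']}"
            ),
        }
    return sections

cfg = render_rgw_sections("default", "testbed-node-3",
                          [{"instance_name": "rgw0",
                            "radosgw_address": "192.168.16.13",
                            "radosgw_frontend_port": 8081}])
print(cfg["client.rgw.default.testbed-node-3.rgw0"]["rgw_frontends"])
# beast endpoint=192.168.16.13:8081
```

With one instance (`rgw0`) per node, this reproduces the `log_file` and `rgw_frontends` pairs that the "Set config to cluster" task iterated over for testbed-node-3 through testbed-node-5.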
2025-10-09 10:32:53.404553 | orchestrator | Thursday 09 October 2025 10:25:09 +0000 (0:00:00.268) 0:04:00.599 ****** 2025-10-09 10:32:53.404557 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.404561 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.404564 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.404568 | orchestrator | 2025-10-09 10:32:53.404572 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-10-09 10:32:53.404576 | orchestrator | Thursday 09 October 2025 10:25:11 +0000 (0:00:02.086) 0:04:02.686 ****** 2025-10-09 10:32:53.404579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:32:53.404583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:32:53.404587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:32:53.404591 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.404595 | orchestrator | 2025-10-09 10:32:53.404598 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-10-09 10:32:53.404602 | orchestrator | Thursday 09 October 2025 10:25:12 +0000 (0:00:00.648) 0:04:03.334 ****** 2025-10-09 10:32:53.404606 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.404610 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.404614 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.404617 | orchestrator | 2025-10-09 10:32:53.404621 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-10-09 10:32:53.404625 | orchestrator | Thursday 09 October 2025 10:25:13 +0000 (0:00:00.666) 0:04:04.001 ****** 2025-10-09 10:32:53.404629 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.404633 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.404636 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.404640 | orchestrator | 
skipping: [testbed-node-3] 2025-10-09 10:32:53.404644 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.404648 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.404652 | orchestrator | 2025-10-09 10:32:53.404655 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-10-09 10:32:53.404659 | orchestrator | Thursday 09 October 2025 10:25:13 +0000 (0:00:00.574) 0:04:04.575 ****** 2025-10-09 10:32:53.404665 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.404669 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.404673 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.404677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:53.404680 | orchestrator | 2025-10-09 10:32:53.404684 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-10-09 10:32:53.404688 | orchestrator | Thursday 09 October 2025 10:25:14 +0000 (0:00:01.006) 0:04:05.582 ****** 2025-10-09 10:32:53.404692 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.404695 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.404699 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.404703 | orchestrator | 2025-10-09 10:32:53.404707 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-10-09 10:32:53.404710 | orchestrator | Thursday 09 October 2025 10:25:15 +0000 (0:00:00.413) 0:04:05.995 ****** 2025-10-09 10:32:53.404714 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.404718 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.404721 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.404725 | orchestrator | 2025-10-09 10:32:53.404729 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-10-09 10:32:53.404733 | 
orchestrator | Thursday 09 October 2025 10:25:16 +0000 (0:00:01.590) 0:04:07.585 ****** 2025-10-09 10:32:53.404739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-10-09 10:32:53.404743 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-10-09 10:32:53.404747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-10-09 10:32:53.404750 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.404754 | orchestrator | 2025-10-09 10:32:53.404758 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-10-09 10:32:53.404762 | orchestrator | Thursday 09 October 2025 10:25:17 +0000 (0:00:00.591) 0:04:08.177 ****** 2025-10-09 10:32:53.404765 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.404769 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.404773 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.404777 | orchestrator | 2025-10-09 10:32:53.404780 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-10-09 10:32:53.404784 | orchestrator | 2025-10-09 10:32:53.404789 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:32:53.404793 | orchestrator | Thursday 09 October 2025 10:25:18 +0000 (0:00:00.591) 0:04:08.769 ****** 2025-10-09 10:32:53.404797 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:53.404801 | orchestrator | 2025-10-09 10:32:53.404805 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:32:53.404808 | orchestrator | Thursday 09 October 2025 10:25:18 +0000 (0:00:00.790) 0:04:09.559 ****** 2025-10-09 10:32:53.404812 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-10-09 10:32:53.404816 | orchestrator | 2025-10-09 10:32:53.404820 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:32:53.404823 | orchestrator | Thursday 09 October 2025 10:25:19 +0000 (0:00:00.693) 0:04:10.253 ****** 2025-10-09 10:32:53.404827 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.404831 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.404835 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.404838 | orchestrator | 2025-10-09 10:32:53.404842 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:32:53.404846 | orchestrator | Thursday 09 October 2025 10:25:20 +0000 (0:00:00.968) 0:04:11.221 ****** 2025-10-09 10:32:53.404850 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.404853 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.404857 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.404861 | orchestrator | 2025-10-09 10:32:53.404865 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:32:53.404868 | orchestrator | Thursday 09 October 2025 10:25:21 +0000 (0:00:00.830) 0:04:12.052 ****** 2025-10-09 10:32:53.404872 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.404876 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.404879 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.404883 | orchestrator | 2025-10-09 10:32:53.404887 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:32:53.404891 | orchestrator | Thursday 09 October 2025 10:25:21 +0000 (0:00:00.389) 0:04:12.442 ****** 2025-10-09 10:32:53.404894 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.404898 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.404902 | orchestrator | skipping: [testbed-node-2] 
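The per-daemon checks above ("Check for a mon container", "Check for an osd container", …) probe each node for a running container so later handlers know which daemons exist there. A minimal sketch of that probe, noting that the `ceph-<daemon>-<hostname>` naming scheme and the use of `docker ps` are assumptions for illustration, not taken from the playbook itself:

```python
# Hypothetical reconstruction of the container-presence probe run per daemon
# type and per node; the naming convention and runtime command are assumed.
def container_check_cmd(daemon: str, hostname: str, runtime: str = "docker") -> str:
    """Build a command that lists a running ceph daemon container, if any."""
    return f"{runtime} ps -q --filter name=ceph-{daemon}-{hostname}"

# One probe per daemon type, as in the task sequence above.
for daemon in ("mon", "osd", "mds", "rgw", "mgr"):
    print(container_check_cmd(daemon, "testbed-node-0"))
```

An empty result from such a command is what lets the role mark a check as `skipping`/`ok` per node, as seen in the output above.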
2025-10-09 10:32:53.404906 | orchestrator | 2025-10-09 10:32:53.404909 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:32:53.404913 | orchestrator | Thursday 09 October 2025 10:25:22 +0000 (0:00:00.449) 0:04:12.891 ****** 2025-10-09 10:32:53.404917 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.404920 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.404924 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.404928 | orchestrator | 2025-10-09 10:32:53.404935 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-09 10:32:53.404938 | orchestrator | Thursday 09 October 2025 10:25:23 +0000 (0:00:00.835) 0:04:13.727 ****** 2025-10-09 10:32:53.404942 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.404946 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.404950 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.404953 | orchestrator | 2025-10-09 10:32:53.404957 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:32:53.404961 | orchestrator | Thursday 09 October 2025 10:25:23 +0000 (0:00:00.549) 0:04:14.277 ****** 2025-10-09 10:32:53.404965 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.404968 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.404972 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.404976 | orchestrator | 2025-10-09 10:32:53.404980 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:32:53.404985 | orchestrator | Thursday 09 October 2025 10:25:23 +0000 (0:00:00.408) 0:04:14.685 ****** 2025-10-09 10:32:53.404989 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.404993 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.404997 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405001 | 
orchestrator | 2025-10-09 10:32:53.405004 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:32:53.405008 | orchestrator | Thursday 09 October 2025 10:25:24 +0000 (0:00:00.872) 0:04:15.558 ****** 2025-10-09 10:32:53.405012 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405016 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405019 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405023 | orchestrator | 2025-10-09 10:32:53.405027 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:32:53.405030 | orchestrator | Thursday 09 October 2025 10:25:25 +0000 (0:00:00.760) 0:04:16.319 ****** 2025-10-09 10:32:53.405034 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405038 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.405042 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.405045 | orchestrator | 2025-10-09 10:32:53.405049 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:32:53.405053 | orchestrator | Thursday 09 October 2025 10:25:26 +0000 (0:00:00.490) 0:04:16.810 ****** 2025-10-09 10:32:53.405057 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405060 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405064 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405068 | orchestrator | 2025-10-09 10:32:53.405071 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:32:53.405075 | orchestrator | Thursday 09 October 2025 10:25:26 +0000 (0:00:00.287) 0:04:17.097 ****** 2025-10-09 10:32:53.405079 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405083 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.405086 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.405090 | orchestrator | 2025-10-09 
10:32:53.405094 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:32:53.405098 | orchestrator | Thursday 09 October 2025 10:25:26 +0000 (0:00:00.270) 0:04:17.367 ****** 2025-10-09 10:32:53.405101 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405105 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.405109 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.405113 | orchestrator | 2025-10-09 10:32:53.405119 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:32:53.405123 | orchestrator | Thursday 09 October 2025 10:25:27 +0000 (0:00:00.504) 0:04:17.872 ****** 2025-10-09 10:32:53.405126 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405130 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.405134 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.405138 | orchestrator | 2025-10-09 10:32:53.405141 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:32:53.405147 | orchestrator | Thursday 09 October 2025 10:25:27 +0000 (0:00:00.320) 0:04:18.192 ****** 2025-10-09 10:32:53.405151 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405155 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.405158 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.405162 | orchestrator | 2025-10-09 10:32:53.405166 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:32:53.405170 | orchestrator | Thursday 09 October 2025 10:25:28 +0000 (0:00:00.528) 0:04:18.721 ****** 2025-10-09 10:32:53.405173 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405177 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.405181 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.405184 | orchestrator | 2025-10-09 
10:32:53.405188 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:32:53.405192 | orchestrator | Thursday 09 October 2025 10:25:28 +0000 (0:00:00.366) 0:04:19.088 ****** 2025-10-09 10:32:53.405196 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405199 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405203 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405228 | orchestrator | 2025-10-09 10:32:53.405232 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:32:53.405236 | orchestrator | Thursday 09 October 2025 10:25:28 +0000 (0:00:00.383) 0:04:19.471 ****** 2025-10-09 10:32:53.405240 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405244 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405247 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405251 | orchestrator | 2025-10-09 10:32:53.405255 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:32:53.405259 | orchestrator | Thursday 09 October 2025 10:25:29 +0000 (0:00:00.433) 0:04:19.904 ****** 2025-10-09 10:32:53.405262 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405266 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405270 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405273 | orchestrator | 2025-10-09 10:32:53.405277 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-10-09 10:32:53.405281 | orchestrator | Thursday 09 October 2025 10:25:30 +0000 (0:00:00.851) 0:04:20.755 ****** 2025-10-09 10:32:53.405284 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405288 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405292 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405296 | orchestrator | 2025-10-09 10:32:53.405299 | orchestrator | TASK [ceph-mon : Include 
deploy_monitors.yml] ********************************** 2025-10-09 10:32:53.405303 | orchestrator | Thursday 09 October 2025 10:25:30 +0000 (0:00:00.378) 0:04:21.134 ****** 2025-10-09 10:32:53.405307 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:53.405311 | orchestrator | 2025-10-09 10:32:53.405314 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-10-09 10:32:53.405318 | orchestrator | Thursday 09 October 2025 10:25:31 +0000 (0:00:00.980) 0:04:22.114 ****** 2025-10-09 10:32:53.405322 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405325 | orchestrator | 2025-10-09 10:32:53.405329 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-10-09 10:32:53.405333 | orchestrator | Thursday 09 October 2025 10:25:31 +0000 (0:00:00.150) 0:04:22.265 ****** 2025-10-09 10:32:53.405337 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-10-09 10:32:53.405340 | orchestrator | 2025-10-09 10:32:53.405347 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-10-09 10:32:53.405351 | orchestrator | Thursday 09 October 2025 10:25:32 +0000 (0:00:01.169) 0:04:23.435 ****** 2025-10-09 10:32:53.405354 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405358 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405362 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405365 | orchestrator | 2025-10-09 10:32:53.405369 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-10-09 10:32:53.405376 | orchestrator | Thursday 09 October 2025 10:25:33 +0000 (0:00:00.369) 0:04:23.804 ****** 2025-10-09 10:32:53.405379 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405383 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405387 | orchestrator | ok: 
[testbed-node-2] 2025-10-09 10:32:53.405390 | orchestrator | 2025-10-09 10:32:53.405394 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-10-09 10:32:53.405398 | orchestrator | Thursday 09 October 2025 10:25:33 +0000 (0:00:00.369) 0:04:24.174 ****** 2025-10-09 10:32:53.405402 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405406 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405409 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.405413 | orchestrator | 2025-10-09 10:32:53.405417 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-10-09 10:32:53.405420 | orchestrator | Thursday 09 October 2025 10:25:34 +0000 (0:00:01.288) 0:04:25.463 ****** 2025-10-09 10:32:53.405424 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405428 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405432 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.405435 | orchestrator | 2025-10-09 10:32:53.405439 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-10-09 10:32:53.405443 | orchestrator | Thursday 09 October 2025 10:25:35 +0000 (0:00:01.170) 0:04:26.634 ****** 2025-10-09 10:32:53.405447 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405450 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405454 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.405458 | orchestrator | 2025-10-09 10:32:53.405461 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-10-09 10:32:53.405465 | orchestrator | Thursday 09 October 2025 10:25:36 +0000 (0:00:00.762) 0:04:27.396 ****** 2025-10-09 10:32:53.405469 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405474 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405478 | orchestrator | ok: [testbed-node-2] 2025-10-09 
10:32:53.405482 | orchestrator | 2025-10-09 10:32:53.405486 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-10-09 10:32:53.405489 | orchestrator | Thursday 09 October 2025 10:25:37 +0000 (0:00:00.749) 0:04:28.146 ****** 2025-10-09 10:32:53.405493 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405497 | orchestrator | 2025-10-09 10:32:53.405501 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-10-09 10:32:53.405504 | orchestrator | Thursday 09 October 2025 10:25:38 +0000 (0:00:01.313) 0:04:29.459 ****** 2025-10-09 10:32:53.405508 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405512 | orchestrator | 2025-10-09 10:32:53.405516 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-10-09 10:32:53.405519 | orchestrator | Thursday 09 October 2025 10:25:39 +0000 (0:00:00.746) 0:04:30.206 ****** 2025-10-09 10:32:53.405523 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:32:53.405527 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.405531 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.405534 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:32:53.405538 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-10-09 10:32:53.405542 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:32:53.405546 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:32:53.405549 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-10-09 10:32:53.405553 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:32:53.405557 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 
2025-10-09 10:32:53.405561 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-10-09 10:32:53.405564 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-10-09 10:32:53.405570 | orchestrator | 2025-10-09 10:32:53.405574 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-10-09 10:32:53.405578 | orchestrator | Thursday 09 October 2025 10:25:43 +0000 (0:00:03.787) 0:04:33.993 ****** 2025-10-09 10:32:53.405582 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405585 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405589 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.405593 | orchestrator | 2025-10-09 10:32:53.405596 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-10-09 10:32:53.405600 | orchestrator | Thursday 09 October 2025 10:25:44 +0000 (0:00:01.211) 0:04:35.204 ****** 2025-10-09 10:32:53.405604 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405608 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405611 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405615 | orchestrator | 2025-10-09 10:32:53.405619 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-10-09 10:32:53.405623 | orchestrator | Thursday 09 October 2025 10:25:44 +0000 (0:00:00.323) 0:04:35.528 ****** 2025-10-09 10:32:53.405626 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.405630 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.405634 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.405637 | orchestrator | 2025-10-09 10:32:53.405641 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-10-09 10:32:53.405645 | orchestrator | Thursday 09 October 2025 10:25:45 +0000 (0:00:00.341) 0:04:35.870 ****** 2025-10-09 10:32:53.405649 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405652 
| orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405656 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.405660 | orchestrator | 2025-10-09 10:32:53.405664 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-10-09 10:32:53.405670 | orchestrator | Thursday 09 October 2025 10:25:47 +0000 (0:00:02.514) 0:04:38.385 ****** 2025-10-09 10:32:53.405674 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405677 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405681 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.405685 | orchestrator | 2025-10-09 10:32:53.405689 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-10-09 10:32:53.405692 | orchestrator | Thursday 09 October 2025 10:25:48 +0000 (0:00:01.271) 0:04:39.657 ****** 2025-10-09 10:32:53.405696 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405700 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.405704 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.405707 | orchestrator | 2025-10-09 10:32:53.405711 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-10-09 10:32:53.405715 | orchestrator | Thursday 09 October 2025 10:25:49 +0000 (0:00:00.379) 0:04:40.036 ****** 2025-10-09 10:32:53.405719 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:53.405722 | orchestrator | 2025-10-09 10:32:53.405726 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-10-09 10:32:53.405730 | orchestrator | Thursday 09 October 2025 10:25:49 +0000 (0:00:00.568) 0:04:40.604 ****** 2025-10-09 10:32:53.405733 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405737 | orchestrator | skipping: [testbed-node-1] 2025-10-09 
10:32:53.405741 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.405745 | orchestrator | 2025-10-09 10:32:53.405748 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-10-09 10:32:53.405752 | orchestrator | Thursday 09 October 2025 10:25:50 +0000 (0:00:00.690) 0:04:41.295 ****** 2025-10-09 10:32:53.405756 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.405760 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.405763 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.405767 | orchestrator | 2025-10-09 10:32:53.405771 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-10-09 10:32:53.405777 | orchestrator | Thursday 09 October 2025 10:25:50 +0000 (0:00:00.341) 0:04:41.637 ****** 2025-10-09 10:32:53.405782 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:53.405786 | orchestrator | 2025-10-09 10:32:53.405790 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-10-09 10:32:53.405793 | orchestrator | Thursday 09 October 2025 10:25:51 +0000 (0:00:00.559) 0:04:42.196 ****** 2025-10-09 10:32:53.405797 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405801 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405804 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.405808 | orchestrator | 2025-10-09 10:32:53.405812 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-10-09 10:32:53.405816 | orchestrator | Thursday 09 October 2025 10:25:53 +0000 (0:00:02.080) 0:04:44.277 ****** 2025-10-09 10:32:53.405819 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405823 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405827 | orchestrator | changed: [testbed-node-2] 2025-10-09 
10:32:53.405831 | orchestrator | 2025-10-09 10:32:53.405834 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-10-09 10:32:53.405838 | orchestrator | Thursday 09 October 2025 10:25:54 +0000 (0:00:01.143) 0:04:45.421 ****** 2025-10-09 10:32:53.405842 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405846 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405849 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.405853 | orchestrator | 2025-10-09 10:32:53.405857 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-10-09 10:32:53.405860 | orchestrator | Thursday 09 October 2025 10:25:56 +0000 (0:00:01.926) 0:04:47.348 ****** 2025-10-09 10:32:53.405864 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.405868 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.405872 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.405875 | orchestrator | 2025-10-09 10:32:53.405879 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-10-09 10:32:53.405883 | orchestrator | Thursday 09 October 2025 10:25:58 +0000 (0:00:01.825) 0:04:49.173 ****** 2025-10-09 10:32:53.405886 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:32:53.405890 | orchestrator | 2025-10-09 10:32:53.405894 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-10-09 10:32:53.405898 | orchestrator | Thursday 09 October 2025 10:25:59 +0000 (0:00:00.929) 0:04:50.103 ****** 2025-10-09 10:32:53.405901 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
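The quorum wait above retries (here, once out of 10 attempts) until every monitor in the monmap has joined the quorum, as reported by `ceph quorum_status`. A small sketch of the success condition, using an illustrative payload shaped like that command's JSON output for the three mon nodes (the sample values are assumptions, not captured from this run):

```python
import json

# Illustrative quorum_status payload for the three testbed mon nodes.
sample = json.dumps({
    "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "monmap": {"mons": [
        {"name": "testbed-node-0"},
        {"name": "testbed-node-1"},
        {"name": "testbed-node-2"},
    ]},
})

def quorum_formed(quorum_status_json: str) -> bool:
    """True once every monitor listed in the monmap is in the quorum."""
    status = json.loads(quorum_status_json)
    monmap_names = {m["name"] for m in status["monmap"]["mons"]}
    return monmap_names == set(status["quorum_names"])

print(quorum_formed(sample))
```

While monitors are still starting, `quorum_names` is a strict subset of the monmap, the condition is false, and the task retries, which matches the single `FAILED - RETRYING` before the eventual `ok` above.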
2025-10-09 10:32:53.405905 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.405909 | orchestrator |
2025-10-09 10:32:53.405913 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-10-09 10:32:53.405916 | orchestrator | Thursday 09 October 2025 10:26:21 +0000 (0:00:21.761) 0:05:11.865 ******
2025-10-09 10:32:53.405920 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.405924 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.405927 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.405931 | orchestrator |
2025-10-09 10:32:53.405935 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-10-09 10:32:53.405939 | orchestrator | Thursday 09 October 2025 10:26:30 +0000 (0:00:09.527) 0:05:21.392 ******
2025-10-09 10:32:53.405942 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.405946 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.405950 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.405954 | orchestrator |
2025-10-09 10:32:53.405957 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-10-09 10:32:53.405961 | orchestrator | Thursday 09 October 2025 10:26:31 +0000 (0:00:00.337) 0:05:21.730 ******
2025-10-09 10:32:53.405968 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__92144ac8850f4dd1f438996291be7c3e41b0eaa2'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-10-09 10:32:53.405975 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__92144ac8850f4dd1f438996291be7c3e41b0eaa2'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-10-09 10:32:53.405979 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__92144ac8850f4dd1f438996291be7c3e41b0eaa2'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-10-09 10:32:53.405984 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__92144ac8850f4dd1f438996291be7c3e41b0eaa2'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-10-09 10:32:53.405989 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__92144ac8850f4dd1f438996291be7c3e41b0eaa2'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-10-09 10:32:53.405994 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__92144ac8850f4dd1f438996291be7c3e41b0eaa2'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__92144ac8850f4dd1f438996291be7c3e41b0eaa2'}])
2025-10-09 10:32:53.405998 | orchestrator |
2025-10-09 10:32:53.406002 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-10-09 10:32:53.406006 | orchestrator | Thursday 09 October 2025 10:26:44 +0000 (0:00:13.894) 0:05:35.624 ******
2025-10-09 10:32:53.406009 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406013 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406037 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406041 | orchestrator |
2025-10-09 10:32:53.406045 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-10-09 10:32:53.406048 | orchestrator | Thursday 09 October 2025 10:26:45 +0000 (0:00:00.422) 0:05:36.047 ******
2025-10-09 10:32:53.406052 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-1, testbed-node-2, testbed-node-0
2025-10-09 10:32:53.406056 | orchestrator |
2025-10-09 10:32:53.406060 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-10-09 10:32:53.406064 | orchestrator | Thursday 09 October 2025 10:26:46 +0000 (0:00:00.903) 0:05:36.951 ******
2025-10-09 10:32:53.406068 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406072 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406075 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406079 | orchestrator |
2025-10-09 10:32:53.406083 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-10-09 10:32:53.406087 | orchestrator | Thursday 09 October 2025 10:26:46 +0000 (0:00:00.377) 0:05:37.329 ******
2025-10-09 10:32:53.406091 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406094 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406101 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406105 | orchestrator |
2025-10-09 10:32:53.406108 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-10-09 10:32:53.406112 | orchestrator | Thursday 09 October 2025 10:26:46 +0000 (0:00:00.355) 0:05:37.685 ******
2025-10-09 10:32:53.406116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:32:53.406120 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-10-09 10:32:53.406124 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-10-09 10:32:53.406127 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406131 | orchestrator |
2025-10-09 10:32:53.406135 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-10-09 10:32:53.406139 | orchestrator | Thursday 09 October 2025 10:26:47 +0000 (0:00:00.938) 0:05:38.623 ******
2025-10-09 10:32:53.406142 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406146 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406150 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406154 | orchestrator |
2025-10-09 10:32:53.406158 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-10-09 10:32:53.406161 | orchestrator |
2025-10-09 10:32:53.406165 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-10-09 10:32:53.406201 | orchestrator | Thursday 09 October 2025 10:26:48 +0000 (0:00:00.927) 0:05:39.550 ******
2025-10-09 10:32:53.406205 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:32:53.406218 | orchestrator |
2025-10-09 10:32:53.406222 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-10-09 10:32:53.406225 | orchestrator | Thursday 09 October 2025 10:26:49 +0000 (0:00:00.617) 0:05:40.168 ******
2025-10-09 10:32:53.406229 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:32:53.406233 | orchestrator |
2025-10-09 10:32:53.406237 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-10-09 10:32:53.406241 | orchestrator | Thursday 09 October 2025 10:26:50 +0000 (0:00:00.799) 0:05:40.968 ******
2025-10-09 10:32:53.406244 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406248 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406252 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406256 | orchestrator |
2025-10-09 10:32:53.406259 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-10-09 10:32:53.406263 | orchestrator | Thursday 09 October 2025 10:26:50 +0000 (0:00:00.698) 0:05:41.666 ******
2025-10-09 10:32:53.406267 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406271 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406275 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406278 | orchestrator |
2025-10-09 10:32:53.406282 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-10-09 10:32:53.406286 | orchestrator | Thursday 09 October 2025 10:26:51 +0000 (0:00:00.352) 0:05:42.019 ******
2025-10-09 10:32:53.406290 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406293 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406297 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406301 | orchestrator |
2025-10-09 10:32:53.406305 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-10-09 10:32:53.406308 | orchestrator | Thursday 09 October 2025 10:26:51 +0000 (0:00:00.381) 0:05:42.400 ******
2025-10-09 10:32:53.406312 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406316 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406320 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406323 | orchestrator |
2025-10-09 10:32:53.406327 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-10-09 10:32:53.406331 | orchestrator | Thursday 09 October 2025 10:26:52 +0000 (0:00:00.591) 0:05:42.992 ******
2025-10-09 10:32:53.406338 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406342 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406346 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406349 | orchestrator |
2025-10-09 10:32:53.406353 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-10-09 10:32:53.406357 | orchestrator | Thursday 09 October 2025 10:26:52 +0000 (0:00:00.711) 0:05:43.703 ******
2025-10-09 10:32:53.406361 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406365 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406368 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406372 | orchestrator |
2025-10-09 10:32:53.406376 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-10-09 10:32:53.406380 | orchestrator | Thursday 09 October 2025 10:26:53 +0000 (0:00:00.346) 0:05:44.049 ******
2025-10-09 10:32:53.406383 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406387 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406391 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406395 | orchestrator |
2025-10-09 10:32:53.406398 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-10-09 10:32:53.406402 | orchestrator | Thursday 09 October 2025 10:26:53 +0000 (0:00:00.377) 0:05:44.427 ******
2025-10-09 10:32:53.406406 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406410 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406413 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406417 | orchestrator |
2025-10-09 10:32:53.406421 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-10-09 10:32:53.406425 | orchestrator | Thursday 09 October 2025 10:26:54 +0000 (0:00:00.707) 0:05:45.135 ******
2025-10-09 10:32:53.406429 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406432 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406436 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406440 | orchestrator |
2025-10-09 10:32:53.406444 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-10-09 10:32:53.406447 | orchestrator | Thursday 09 October 2025 10:26:55 +0000 (0:00:01.129) 0:05:46.265 ******
2025-10-09 10:32:53.406451 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406455 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406459 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406463 | orchestrator |
2025-10-09 10:32:53.406466 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-10-09 10:32:53.406470 | orchestrator | Thursday 09 October 2025 10:26:55 +0000 (0:00:00.359) 0:05:46.624 ******
2025-10-09 10:32:53.406485 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406490 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406493 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406497 | orchestrator |
2025-10-09 10:32:53.406501 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-10-09 10:32:53.406505 | orchestrator | Thursday 09 October 2025 10:26:56 +0000 (0:00:00.357) 0:05:46.981 ******
2025-10-09 10:32:53.406508 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406512 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406516 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406520 | orchestrator |
2025-10-09 10:32:53.406523 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-10-09 10:32:53.406527 | orchestrator | Thursday 09 October 2025 10:26:56 +0000 (0:00:00.336) 0:05:47.318 ******
2025-10-09 10:32:53.406531 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406535 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406538 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406542 | orchestrator |
2025-10-09 10:32:53.406546 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-10-09 10:32:53.406557 | orchestrator | Thursday 09 October 2025 10:26:57 +0000 (0:00:00.608) 0:05:47.927 ******
2025-10-09 10:32:53.406561 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406567 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406571 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406575 | orchestrator |
2025-10-09 10:32:53.406579 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-10-09 10:32:53.406582 | orchestrator | Thursday 09 October 2025 10:26:57 +0000 (0:00:00.352) 0:05:48.279 ******
2025-10-09 10:32:53.406586 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406590 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406594 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406597 | orchestrator |
2025-10-09 10:32:53.406601 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-10-09 10:32:53.406605 | orchestrator | Thursday 09 October 2025 10:26:57 +0000 (0:00:00.372) 0:05:48.652 ******
2025-10-09 10:32:53.406609 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406612 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406616 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406620 | orchestrator |
2025-10-09 10:32:53.406623 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-10-09 10:32:53.406627 | orchestrator | Thursday 09 October 2025 10:26:58 +0000 (0:00:00.333) 0:05:48.986 ******
2025-10-09 10:32:53.406631 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406635 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406639 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406642 | orchestrator |
2025-10-09 10:32:53.406646 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-10-09 10:32:53.406650 | orchestrator | Thursday 09 October 2025 10:26:58 +0000 (0:00:00.646) 0:05:49.632 ******
2025-10-09 10:32:53.406654 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406657 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406661 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406665 | orchestrator |
2025-10-09 10:32:53.406669 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-10-09 10:32:53.406672 | orchestrator | Thursday 09 October 2025 10:26:59 +0000 (0:00:00.394) 0:05:50.026 ******
2025-10-09 10:32:53.406676 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406680 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406686 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406690 | orchestrator |
2025-10-09 10:32:53.406694 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-10-09 10:32:53.406698 | orchestrator | Thursday 09 October 2025 10:26:59 +0000 (0:00:00.580) 0:05:50.607 ******
2025-10-09 10:32:53.406701 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:32:53.406705 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-10-09 10:32:53.406709 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-10-09 10:32:53.406713 | orchestrator |
2025-10-09 10:32:53.406716 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-10-09 10:32:53.406720 | orchestrator | Thursday 09 October 2025 10:27:00 +0000 (0:00:01.011) 0:05:51.618 ******
2025-10-09 10:32:53.406724 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:32:53.406728 | orchestrator |
2025-10-09 10:32:53.406732 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-10-09 10:32:53.406735 | orchestrator | Thursday 09 October 2025 10:27:01 +0000 (0:00:00.893) 0:05:52.512 ******
2025-10-09 10:32:53.406739 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.406743 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.406746 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.406750 | orchestrator |
2025-10-09 10:32:53.406754 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-10-09 10:32:53.406758 | orchestrator | Thursday 09 October 2025 10:27:02 +0000 (0:00:00.708) 0:05:53.221 ******
2025-10-09 10:32:53.406761 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406765 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406772 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406775 | orchestrator |
2025-10-09 10:32:53.406779 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-10-09 10:32:53.406783 | orchestrator | Thursday 09 October 2025 10:27:02 +0000 (0:00:00.313) 0:05:53.535 ******
2025-10-09 10:32:53.406787 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-10-09 10:32:53.406790 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-10-09 10:32:53.406794 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-10-09 10:32:53.406798 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-10-09 10:32:53.406802 | orchestrator |
2025-10-09 10:32:53.406805 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-10-09 10:32:53.406809 | orchestrator | Thursday 09 October 2025 10:27:13 +0000 (0:00:10.699) 0:06:04.234 ******
2025-10-09 10:32:53.406813 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406817 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406820 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406824 | orchestrator |
2025-10-09 10:32:53.406828 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-10-09 10:32:53.406832 | orchestrator | Thursday 09 October 2025 10:27:14 +0000 (0:00:00.673) 0:06:04.908 ******
2025-10-09 10:32:53.406835 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-10-09 10:32:53.406839 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-10-09 10:32:53.406843 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-10-09 10:32:53.406847 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-10-09 10:32:53.406850 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-09 10:32:53.406854 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-09 10:32:53.406858 | orchestrator |
2025-10-09 10:32:53.406862 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-10-09 10:32:53.406865 | orchestrator | Thursday 09 October 2025 10:27:16 +0000 (0:00:02.172) 0:06:07.080 ******
2025-10-09 10:32:53.406872 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-10-09 10:32:53.406876 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-10-09 10:32:53.406880 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-10-09 10:32:53.406884 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-10-09 10:32:53.406887 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-10-09 10:32:53.406891 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-10-09 10:32:53.406895 | orchestrator |
2025-10-09 10:32:53.406899 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-10-09 10:32:53.406902 | orchestrator | Thursday 09 October 2025 10:27:17 +0000 (0:00:01.253) 0:06:08.334 ******
2025-10-09 10:32:53.406906 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.406910 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.406914 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.406917 | orchestrator |
2025-10-09 10:32:53.406921 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-10-09 10:32:53.406925 | orchestrator | Thursday 09 October 2025 10:27:18 +0000 (0:00:00.716) 0:06:09.051 ******
2025-10-09 10:32:53.406929 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406933 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406937 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406940 | orchestrator |
2025-10-09 10:32:53.406944 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-10-09 10:32:53.406948 | orchestrator | Thursday 09 October 2025 10:27:19 +0000 (0:00:00.729) 0:06:09.780 ******
2025-10-09 10:32:53.406952 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406955 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.406959 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.406963 | orchestrator |
2025-10-09 10:32:53.406966 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-10-09 10:32:53.406974 | orchestrator | Thursday 09 October 2025 10:27:19 +0000 (0:00:00.314) 0:06:10.094 ******
2025-10-09 10:32:53.406978 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:32:53.406982 | orchestrator |
2025-10-09 10:32:53.406986 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-10-09 10:32:53.406992 | orchestrator | Thursday 09 October 2025 10:27:19 +0000 (0:00:00.547) 0:06:10.642 ******
2025-10-09 10:32:53.406996 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.406999 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.407003 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.407007 | orchestrator |
2025-10-09 10:32:53.407011 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-10-09 10:32:53.407014 | orchestrator | Thursday 09 October 2025 10:27:20 +0000 (0:00:00.627) 0:06:11.270 ******
2025-10-09 10:32:53.407018 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.407022 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.407026 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:32:53.407029 | orchestrator |
2025-10-09 10:32:53.407033 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-10-09 10:32:53.407037 | orchestrator | Thursday 09 October 2025 10:27:20 +0000 (0:00:00.365) 0:06:11.635 ******
2025-10-09 10:32:53.407041 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:32:53.407045 | orchestrator |
2025-10-09 10:32:53.407048 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-10-09 10:32:53.407052 | orchestrator | Thursday 09 October 2025 10:27:21 +0000 (0:00:00.706) 0:06:12.342 ******
2025-10-09 10:32:53.407056 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.407060 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.407063 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.407067 | orchestrator |
2025-10-09 10:32:53.407071 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-10-09 10:32:53.407075 | orchestrator | Thursday 09 October 2025 10:27:23 +0000 (0:00:01.710) 0:06:14.052 ******
2025-10-09 10:32:53.407078 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.407082 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.407086 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.407090 | orchestrator |
2025-10-09 10:32:53.407093 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-10-09 10:32:53.407097 | orchestrator | Thursday 09 October 2025 10:27:24 +0000 (0:00:01.261) 0:06:15.314 ******
2025-10-09 10:32:53.407101 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.407105 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.407108 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.407112 | orchestrator |
2025-10-09 10:32:53.407116 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-10-09 10:32:53.407120 | orchestrator | Thursday 09 October 2025 10:27:26 +0000 (0:00:01.849) 0:06:17.163 ******
2025-10-09 10:32:53.407123 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.407127 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.407131 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.407134 | orchestrator |
2025-10-09 10:32:53.407138 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-10-09 10:32:53.407142 | orchestrator | Thursday 09 October 2025 10:27:28 +0000 (0:00:02.077) 0:06:19.240 ******
2025-10-09 10:32:53.407146 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.407149 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:32:53.407153 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-10-09 10:32:53.407157 | orchestrator |
2025-10-09 10:32:53.407161 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-10-09 10:32:53.407164 | orchestrator | Thursday 09 October 2025 10:27:29 +0000 (0:00:00.745) 0:06:19.986 ******
2025-10-09 10:32:53.407172 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-10-09 10:32:53.407176 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-10-09 10:32:53.407182 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-10-09 10:32:53.407186 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-10-09 10:32:53.407189 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2025-10-09 10:32:53.407193 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-10-09 10:32:53.407197 | orchestrator |
2025-10-09 10:32:53.407201 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-10-09 10:32:53.407205 | orchestrator | Thursday 09 October 2025 10:27:59 +0000 (0:00:30.147) 0:06:50.134 ******
2025-10-09 10:32:53.407218 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-10-09 10:32:53.407221 | orchestrator |
2025-10-09 10:32:53.407225 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-10-09 10:32:53.407229 | orchestrator | Thursday 09 October 2025 10:28:00 +0000 (0:00:01.347) 0:06:51.481 ******
2025-10-09 10:32:53.407233 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.407236 | orchestrator |
2025-10-09 10:32:53.407240 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-10-09 10:32:53.407244 | orchestrator | Thursday 09 October 2025 10:28:01 +0000 (0:00:00.530) 0:06:52.012 ******
2025-10-09 10:32:53.407248 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.407252 | orchestrator |
2025-10-09 10:32:53.407256 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-10-09 10:32:53.407259 | orchestrator | Thursday 09 October 2025 10:28:01 +0000 (0:00:00.189) 0:06:52.201 ******
2025-10-09 10:32:53.407263 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-10-09 10:32:53.407267 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-10-09 10:32:53.407271 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-10-09 10:32:53.407274 | orchestrator |
2025-10-09 10:32:53.407278 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-10-09 10:32:53.407285 | orchestrator | Thursday 09 October 2025 10:28:07 +0000 (0:00:06.389) 0:06:58.591 ******
2025-10-09 10:32:53.407288 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-10-09 10:32:53.407292 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-10-09 10:32:53.407296 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-10-09 10:32:53.407300 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-10-09 10:32:53.407304 | orchestrator |
2025-10-09 10:32:53.407308 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-10-09 10:32:53.407311 | orchestrator | Thursday 09 October 2025 10:28:12 +0000 (0:00:05.094) 0:07:03.685 ******
2025-10-09 10:32:53.407315 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.407319 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.407323 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.407326 | orchestrator |
2025-10-09 10:32:53.407330 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-10-09 10:32:53.407334 | orchestrator | Thursday 09 October 2025 10:28:13 +0000 (0:00:00.719) 0:07:04.404 ******
2025-10-09 10:32:53.407338 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:32:53.407342 | orchestrator |
2025-10-09 10:32:53.407345 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-10-09 10:32:53.407349 | orchestrator | Thursday 09 October 2025 10:28:14 +0000 (0:00:00.616) 0:07:05.021 ******
2025-10-09 10:32:53.407356 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.407360 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.407364 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.407367 | orchestrator |
2025-10-09 10:32:53.407371 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-10-09 10:32:53.407375 | orchestrator | Thursday 09 October 2025 10:28:14 +0000 (0:00:00.351) 0:07:05.373 ******
2025-10-09 10:32:53.407379 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:32:53.407382 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:32:53.407386 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:32:53.407390 | orchestrator |
2025-10-09 10:32:53.407394 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-10-09 10:32:53.407397 | orchestrator | Thursday 09 October 2025 10:28:16 +0000 (0:00:01.477) 0:07:06.850 ******
2025-10-09 10:32:53.407401 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:32:53.407405 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-10-09 10:32:53.407409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-10-09 10:32:53.407413 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:32:53.407416 | orchestrator |
2025-10-09 10:32:53.407420 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-10-09 10:32:53.407424 | orchestrator | Thursday 09 October 2025 10:28:16 +0000 (0:00:00.673) 0:07:07.524 ******
2025-10-09 10:32:53.407428 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:32:53.407431 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:32:53.407435 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:32:53.407439 | orchestrator |
2025-10-09 10:32:53.407443 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-10-09 10:32:53.407447 | orchestrator |
2025-10-09 10:32:53.407451 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-10-09 10:32:53.407454 | orchestrator | Thursday 09 October 2025 10:28:17 +0000 (0:00:00.563) 0:07:08.088 ******
2025-10-09 10:32:53.407458 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.407462 | orchestrator |
2025-10-09 10:32:53.407466 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-10-09 10:32:53.407472 | orchestrator | Thursday 09 October 2025 10:28:18 +0000 (0:00:00.826) 0:07:08.915 ******
2025-10-09 10:32:53.407476 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.407480 | orchestrator |
2025-10-09 10:32:53.407484 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-10-09 10:32:53.407488 | orchestrator | Thursday 09 October 2025 10:28:18 +0000 (0:00:00.567) 0:07:09.482 ******
2025-10-09 10:32:53.407491 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.407495 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.407499 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.407503 | orchestrator |
2025-10-09 10:32:53.407506 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-10-09 10:32:53.407510 | orchestrator | Thursday 09 October 2025 10:28:19 +0000 (0:00:00.595) 0:07:10.078 ******
2025-10-09 10:32:53.407514 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.407518 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.407521 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.407525 | orchestrator |
2025-10-09 10:32:53.407529 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-10-09 10:32:53.407533 | orchestrator | Thursday 09 October 2025 10:28:20 +0000 (0:00:00.680) 0:07:10.759 ******
2025-10-09 10:32:53.407536 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.407540 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.407544 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.407548 | orchestrator |
2025-10-09 10:32:53.407552 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-10-09 10:32:53.407558 | orchestrator | Thursday 09 October 2025 10:28:20 +0000 (0:00:00.730) 0:07:11.489 ******
2025-10-09 10:32:53.407562 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.407566 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.407569 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.407573 | orchestrator |
2025-10-09 10:32:53.407577 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-10-09 10:32:53.407581 | orchestrator | Thursday 09 October 2025 10:28:21 +0000 (0:00:00.718) 0:07:12.208 ******
2025-10-09 10:32:53.407585 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.407588 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.407592 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.407596 | orchestrator |
2025-10-09 10:32:53.407602 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-10-09 10:32:53.407606 | orchestrator | Thursday 09 October 2025 10:28:22 +0000 (0:00:00.649) 0:07:12.857 ******
2025-10-09 10:32:53.407609 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.407613 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.407617 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.407621 | orchestrator |
2025-10-09 10:32:53.407624 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-10-09 10:32:53.407628 | orchestrator | Thursday 09 October 2025 10:28:22 +0000 (0:00:00.310) 0:07:13.168 ******
2025-10-09 10:32:53.407632 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.407636 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.407639 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.407643 | orchestrator |
2025-10-09 10:32:53.407647 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-10-09 10:32:53.407651 | orchestrator | Thursday 09 October 2025 10:28:22 +0000 (0:00:00.313) 0:07:13.481 ******
2025-10-09 10:32:53.407654 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.407658 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.407662 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.407666 | orchestrator |
2025-10-09 10:32:53.407669 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-10-09 10:32:53.407673 | orchestrator | Thursday 09 October 2025 10:28:23 +0000 (0:00:00.654) 0:07:14.135 ******
2025-10-09 10:32:53.407677 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.407681 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.407685 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.407689 | orchestrator |
2025-10-09 10:32:53.407692 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-10-09 10:32:53.407696 | orchestrator | Thursday 09 October 2025 10:28:24 +0000 (0:00:00.971) 0:07:15.106 ******
2025-10-09 10:32:53.407700 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.407704 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.407707 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.407711 | orchestrator |
2025-10-09 10:32:53.407715 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-10-09 10:32:53.407719 | orchestrator | Thursday 09 October 2025 10:28:24 +0000 (0:00:00.362) 0:07:15.469 ******
2025-10-09 10:32:53.407723 | orchestrator | skipping:
[testbed-node-3] 2025-10-09 10:32:53.407726 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.407730 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.407734 | orchestrator | 2025-10-09 10:32:53.407737 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:32:53.407741 | orchestrator | Thursday 09 October 2025 10:28:25 +0000 (0:00:00.349) 0:07:15.818 ****** 2025-10-09 10:32:53.407745 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.407749 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.407752 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.407756 | orchestrator | 2025-10-09 10:32:53.407760 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:32:53.407763 | orchestrator | Thursday 09 October 2025 10:28:25 +0000 (0:00:00.344) 0:07:16.163 ****** 2025-10-09 10:32:53.407770 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.407774 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.407778 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.407781 | orchestrator | 2025-10-09 10:32:53.407785 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:32:53.407789 | orchestrator | Thursday 09 October 2025 10:28:25 +0000 (0:00:00.321) 0:07:16.485 ****** 2025-10-09 10:32:53.407793 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.407797 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.407800 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.407804 | orchestrator | 2025-10-09 10:32:53.407808 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:32:53.407811 | orchestrator | Thursday 09 October 2025 10:28:26 +0000 (0:00:00.652) 0:07:17.137 ****** 2025-10-09 10:32:53.407818 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.407822 | 
orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.407826 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.407829 | orchestrator | 2025-10-09 10:32:53.407833 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:32:53.407837 | orchestrator | Thursday 09 October 2025 10:28:26 +0000 (0:00:00.349) 0:07:17.487 ****** 2025-10-09 10:32:53.407841 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.407844 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.407848 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.407852 | orchestrator | 2025-10-09 10:32:53.407855 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:32:53.407859 | orchestrator | Thursday 09 October 2025 10:28:27 +0000 (0:00:00.324) 0:07:17.811 ****** 2025-10-09 10:32:53.407863 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.407867 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.407870 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.407874 | orchestrator | 2025-10-09 10:32:53.407878 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:32:53.407882 | orchestrator | Thursday 09 October 2025 10:28:27 +0000 (0:00:00.326) 0:07:18.138 ****** 2025-10-09 10:32:53.407885 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.407889 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.407893 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.407897 | orchestrator | 2025-10-09 10:32:53.407900 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:32:53.407904 | orchestrator | Thursday 09 October 2025 10:28:28 +0000 (0:00:00.670) 0:07:18.808 ****** 2025-10-09 10:32:53.407908 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.407912 | orchestrator | ok: 
[testbed-node-4] 2025-10-09 10:32:53.407915 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.407919 | orchestrator | 2025-10-09 10:32:53.407923 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-10-09 10:32:53.407927 | orchestrator | Thursday 09 October 2025 10:28:28 +0000 (0:00:00.593) 0:07:19.402 ****** 2025-10-09 10:32:53.407930 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.407934 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.407938 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.407942 | orchestrator | 2025-10-09 10:32:53.407945 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-10-09 10:32:53.407951 | orchestrator | Thursday 09 October 2025 10:28:29 +0000 (0:00:00.361) 0:07:19.763 ****** 2025-10-09 10:32:53.407955 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:32:53.407959 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:32:53.407963 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:32:53.407967 | orchestrator | 2025-10-09 10:32:53.407970 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-10-09 10:32:53.407974 | orchestrator | Thursday 09 October 2025 10:28:30 +0000 (0:00:01.229) 0:07:20.992 ****** 2025-10-09 10:32:53.407980 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.407984 | orchestrator | 2025-10-09 10:32:53.407988 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-10-09 10:32:53.407992 | orchestrator | Thursday 09 October 2025 10:28:30 +0000 (0:00:00.578) 0:07:21.571 ****** 2025-10-09 10:32:53.407996 | orchestrator | skipping: 
[testbed-node-3] 2025-10-09 10:32:53.407999 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.408003 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.408007 | orchestrator | 2025-10-09 10:32:53.408010 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-10-09 10:32:53.408014 | orchestrator | Thursday 09 October 2025 10:28:31 +0000 (0:00:00.355) 0:07:21.926 ****** 2025-10-09 10:32:53.408018 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.408022 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.408025 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.408029 | orchestrator | 2025-10-09 10:32:53.408033 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-10-09 10:32:53.408037 | orchestrator | Thursday 09 October 2025 10:28:31 +0000 (0:00:00.674) 0:07:22.600 ****** 2025-10-09 10:32:53.408040 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.408044 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.408048 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.408052 | orchestrator | 2025-10-09 10:32:53.408055 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-10-09 10:32:53.408059 | orchestrator | Thursday 09 October 2025 10:28:32 +0000 (0:00:00.679) 0:07:23.280 ****** 2025-10-09 10:32:53.408063 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.408067 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.408070 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.408074 | orchestrator | 2025-10-09 10:32:53.408078 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-10-09 10:32:53.408082 | orchestrator | Thursday 09 October 2025 10:28:33 +0000 (0:00:00.495) 0:07:23.776 ****** 2025-10-09 10:32:53.408085 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-10-09 10:32:53.408089 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-10-09 10:32:53.408093 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-10-09 10:32:53.408097 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-10-09 10:32:53.408100 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-10-09 10:32:53.408104 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-10-09 10:32:53.408108 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-10-09 10:32:53.408115 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-10-09 10:32:53.408119 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-10-09 10:32:53.408122 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-10-09 10:32:53.408126 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-10-09 10:32:53.408130 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-10-09 10:32:53.408134 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-10-09 10:32:53.408137 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-10-09 10:32:53.408141 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-10-09 10:32:53.408147 | orchestrator | 2025-10-09 10:32:53.408151 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
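For reference, the tuning items applied above are plain sysctl settings. A minimal sketch (the `render_sysctl` helper is hypothetical; ceph-ansible itself applies these items host-by-host via the `ansible.posix.sysctl` module) that renders the logged item list into a sysctl.conf-style fragment:

```python
# Hypothetical helper: render the os_tuning items seen in the log above
# into a sysctl.conf-style fragment. Values are copied from the task output.
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render_sysctl(params):
    # Items may carry an optional 'enable' flag; a missing flag counts as enabled.
    return "\n".join(
        f"{p['name']} = {p['value']}" for p in params if p.get("enable", True)
    )

print(render_sysctl(os_tuning_params))
# → fs.aio-max-nr = 1048576  (first of five lines)
```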
Thursday 09 October 2025 10:28:36 +0000 (0:00:03.348) 0:07:27.125 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Thursday 09 October 2025 10:28:36 +0000 (0:00:00.570) 0:07:27.695 ******
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Thursday 09 October 2025 10:28:37 +0000 (0:00:00.544) 0:07:28.239 ******
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Thursday 09 October 2025 10:28:38 +0000 (0:00:00.998) 0:07:29.238 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Thursday 09 October 2025 10:28:40 +0000 (0:00:01.962) 0:07:31.201 ******
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Thursday 09 October 2025 10:28:41 +0000 (0:00:01.442) 0:07:32.643 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Thursday 09 October 2025 10:28:43 +0000 (0:00:01.964) 0:07:34.609 ******
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Thursday 09 October 2025 10:28:44 +0000 (0:00:00.573) 0:07:35.182 ******
changed: [testbed-node-4] => (item={'data': 'osd-block-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee', 'data_vg': 'ceph-bec6f5a4-3c2e-53c4-9bd6-39a84a6eb9ee'})
changed: [testbed-node-3] => (item={'data': 'osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74', 'data_vg': 'ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74'})
changed: [testbed-node-5] => (item={'data': 'osd-block-83d577c9-ff1a-5f1d-bd0e-44f99d742f78', 'data_vg': 'ceph-83d577c9-ff1a-5f1d-bd0e-44f99d742f78'})
changed: [testbed-node-4] => (item={'data': 'osd-block-db411f8a-05b0-54f7-b748-fd517a3c676f', 'data_vg': 'ceph-db411f8a-05b0-54f7-b748-fd517a3c676f'})
changed: [testbed-node-3] => (item={'data': 'osd-block-0b8397ec-b473-5fab-a988-270c3fd4ebb0', 'data_vg': 'ceph-0b8397ec-b473-5fab-a988-270c3fd4ebb0'})
changed: [testbed-node-5] => (item={'data': 'osd-block-8ce20a60-fba3-5536-8b48-1e48c039a9b4', 'data_vg': 'ceph-8ce20a60-fba3-5536-8b48-1e48c039a9b4'})

TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Thursday 09 October 2025 10:29:24 +0000 (0:00:40.228) 0:08:15.411 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
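Each `lvm_volumes` item in the "Use ceph-volume to create osds" task maps to one `ceph-volume lvm create` run against a pre-provisioned logical volume. A minimal sketch of that mapping (the helper is hypothetical, and the container wrapper ceph-ansible places around the command is omitted; `--bluestore` and `--data vg/lv` are standard ceph-volume flags):

```python
# Hypothetical helper: build the ceph-volume command line that one
# lvm_volumes item from the log above effectively translates to.
def ceph_volume_create_cmd(item):
    # --data takes "volume_group/logical_volume" when the data is an LV.
    return [
        "ceph-volume", "lvm", "create", "--bluestore",
        "--data", f"{item['data_vg']}/{item['data']}",
    ]

# One item copied from the task output for testbed-node-3:
item = {
    "data": "osd-block-0cbdaba5-e3a8-55ff-9207-33249002ea74",
    "data_vg": "ceph-0cbdaba5-e3a8-55ff-9207-33249002ea74",
}
print(" ".join(ceph_volume_create_cmd(item)))
```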
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Thursday 09 October 2025 10:29:25 +0000 (0:00:00.664) 0:08:16.075 ******
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Thursday 09 October 2025 10:29:25 +0000 (0:00:00.549) 0:08:16.624 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Thursday 09 October 2025 10:29:26 +0000 (0:00:00.698) 0:08:17.323 ******
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Thursday 09 October 2025 10:29:29 +0000 (0:00:03.034) 0:08:20.357 ******
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Thursday 09 October 2025 10:29:30 +0000 (0:00:00.590) 0:08:20.948 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Thursday 09 October 2025 10:29:31 +0000 (0:00:01.209) 0:08:22.157 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Thursday 09 October 2025 10:29:32 +0000 (0:00:01.503) 0:08:23.661 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Thursday 09 October 2025 10:29:34 +0000 (0:00:01.772) 0:08:25.433 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Thursday 09 October 2025 10:29:35 +0000 (0:00:00.401) 0:08:25.835 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Thursday 09 October 2025 10:29:35 +0000 (0:00:00.366) 0:08:26.201 ******
ok: [testbed-node-3] => (item=4)
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=1)
ok: [testbed-node-5] => (item=2)
ok: [testbed-node-4] => (item=3)
ok: [testbed-node-5] => (item=5)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
Thursday 09 October 2025 10:29:36 +0000 (0:00:01.409) 0:08:27.611 ******
changed: [testbed-node-3] => (item=4)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=5)

TASK [ceph-osd : Systemd start osd] ********************************************
Thursday 09 October 2025 10:29:39 +0000 (0:00:02.196) 0:08:29.808 ******
changed: [testbed-node-3] => (item=4)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=5)

TASK [ceph-osd : Unset noup flag] **********************************************
Thursday 09 October 2025 10:29:42 +0000 (0:00:03.639) 0:08:33.448 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Thursday 09 October 2025 10:29:45 +0000 (0:00:02.662) 0:08:36.110 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
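The "Wait for all osd to be up" task is an Ansible `until`/`retries` loop: it repeatedly queries OSD status (the log shows one failed poll out of 60 allowed retries before success). The pattern can be sketched as a plain retry loop; `get_osd_stat` is a hypothetical stand-in for parsing `ceph osd stat` output:

```python
import time

# Sketch of Ansible until/retries/delay semantics for the wait task above.
# get_osd_stat is a hypothetical callable returning a dict like
# {"num_up_osds": N}; in the real task this comes from `ceph osd stat`.
def wait_for_osds_up(get_osd_stat, expected, retries=60, delay=1.0):
    for _ in range(retries):
        stat = get_osd_stat()
        if stat.get("num_up_osds") == expected:
            return True
        time.sleep(delay)
    return False

# Fake checker that succeeds on the second poll, mirroring the single
# "FAILED - RETRYING ... (60 retries left)" line in the log.
polls = iter([{"num_up_osds": 5}, {"num_up_osds": 6}])
print(wait_for_osds_up(lambda: next(polls), expected=6, delay=0))
# → True
```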
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include crush_rules.yml] **************************************
Thursday 09 October 2025 10:29:58 +0000 (0:00:12.775) 0:08:48.886 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Thursday 09 October 2025 10:29:59 +0000 (0:00:00.921) 0:08:49.808 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Thursday 09 October 2025 10:29:59 +0000 (0:00:00.663) 0:08:50.472 ******
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Thursday 09 October 2025 10:30:00 +0000 (0:00:00.585) 0:08:51.058 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Thursday 09 October 2025 10:30:00 +0000 (0:00:00.456) 0:08:51.514 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Thursday 09 October 2025 10:30:01 +0000 (0:00:00.632) 0:08:52.147 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Thursday 09 October 2025 10:30:01 +0000 (0:00:00.274) 0:08:52.422 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Thursday 09 October 2025 10:30:02 +0000 (0:00:00.370) 0:08:52.792 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Thursday 09 October 2025 10:30:02 +0000 (0:00:00.254) 0:08:53.047 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Thursday 09 October 2025 10:30:02 +0000 (0:00:00.266) 0:08:53.314 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Thursday 09 October 2025 10:30:02 +0000 (0:00:00.141) 0:08:53.455 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Thursday 09 October 2025 10:30:03 +0000 (0:00:00.266) 0:08:53.721 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Thursday 09 October 2025 10:30:03 +0000 (0:00:00.299) 0:08:54.021 ******
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Thursday 09 October 2025 10:30:04 +0000 (0:00:00.808) 0:08:54.829 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Thursday 09 October 2025 10:30:04 +0000 (0:00:00.620) 0:08:55.449 ******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Thursday 09 October 2025 10:30:04 +0000 (0:00:00.235) 0:08:55.685 ******
skipping: [testbed-node-3]

PLAY [Apply role ceph-crash] ***************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Thursday 09 October 2025 10:30:05 +0000 (0:00:00.746) 0:08:56.431 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Thursday 09 October 2025 10:30:07 +0000 (0:00:01.303) 0:08:57.734 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Thursday 09 October 2025 10:30:08 +0000 (0:00:01.281) 0:08:59.015 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Thursday 09 October 2025 10:30:09 +0000 (0:00:00.891) 0:08:59.907 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Thursday 09
October 2025 10:30:10 +0000 (0:00:01.089) 0:09:00.997 ****** 2025-10-09 10:32:53.409180 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409184 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409187 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409191 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.409195 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409199 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409202 | orchestrator | 2025-10-09 10:32:53.409215 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:32:53.409219 | orchestrator | Thursday 09 October 2025 10:30:11 +0000 (0:00:01.336) 0:09:02.333 ****** 2025-10-09 10:32:53.409223 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409226 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409230 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409234 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.409238 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409241 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409245 | orchestrator | 2025-10-09 10:32:53.409249 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:32:53.409253 | orchestrator | Thursday 09 October 2025 10:30:12 +0000 (0:00:01.082) 0:09:03.415 ****** 2025-10-09 10:32:53.409257 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.409260 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409264 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.409268 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.409271 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.409275 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.409279 | orchestrator | 2025-10-09 10:32:53.409282 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-10-09 10:32:53.409286 | orchestrator | Thursday 09 October 2025 10:30:13 +0000 (0:00:01.075) 0:09:04.491 ****** 2025-10-09 10:32:53.409290 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409294 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409297 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409301 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.409305 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.409309 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.409312 | orchestrator | 2025-10-09 10:32:53.409319 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:32:53.409323 | orchestrator | Thursday 09 October 2025 10:30:14 +0000 (0:00:00.670) 0:09:05.162 ****** 2025-10-09 10:32:53.409326 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409330 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409334 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409340 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.409344 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.409348 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.409351 | orchestrator | 2025-10-09 10:32:53.409355 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:32:53.409359 | orchestrator | Thursday 09 October 2025 10:30:15 +0000 (0:00:00.917) 0:09:06.080 ****** 2025-10-09 10:32:53.409362 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409366 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.409370 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.409374 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.409377 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409381 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409385 | orchestrator 
| 2025-10-09 10:32:53.409388 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:32:53.409392 | orchestrator | Thursday 09 October 2025 10:30:16 +0000 (0:00:01.057) 0:09:07.138 ****** 2025-10-09 10:32:53.409396 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409399 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.409403 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.409407 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.409410 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409414 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409418 | orchestrator | 2025-10-09 10:32:53.409421 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:32:53.409425 | orchestrator | Thursday 09 October 2025 10:30:17 +0000 (0:00:01.316) 0:09:08.454 ****** 2025-10-09 10:32:53.409429 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409433 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409436 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409440 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.409444 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.409448 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.409451 | orchestrator | 2025-10-09 10:32:53.409455 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:32:53.409461 | orchestrator | Thursday 09 October 2025 10:30:18 +0000 (0:00:00.626) 0:09:09.080 ****** 2025-10-09 10:32:53.409465 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409469 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.409472 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.409476 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.409480 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.409483 | 
orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.409487 | orchestrator | 2025-10-09 10:32:53.409491 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:32:53.409494 | orchestrator | Thursday 09 October 2025 10:30:18 +0000 (0:00:00.606) 0:09:09.687 ****** 2025-10-09 10:32:53.409498 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409502 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409506 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409509 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.409513 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409517 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409520 | orchestrator | 2025-10-09 10:32:53.409524 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:32:53.409528 | orchestrator | Thursday 09 October 2025 10:30:19 +0000 (0:00:00.924) 0:09:10.611 ****** 2025-10-09 10:32:53.409531 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409535 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409539 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409542 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.409546 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409550 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409553 | orchestrator | 2025-10-09 10:32:53.409557 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:32:53.409564 | orchestrator | Thursday 09 October 2025 10:30:20 +0000 (0:00:00.622) 0:09:11.234 ****** 2025-10-09 10:32:53.409568 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409571 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409575 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409579 | orchestrator | ok: [testbed-node-3] 
2025-10-09 10:32:53.409582 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409586 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409590 | orchestrator | 2025-10-09 10:32:53.409594 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:32:53.409597 | orchestrator | Thursday 09 October 2025 10:30:21 +0000 (0:00:00.959) 0:09:12.193 ****** 2025-10-09 10:32:53.409601 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409605 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409609 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409612 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.409616 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.409620 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.409623 | orchestrator | 2025-10-09 10:32:53.409627 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:32:53.409631 | orchestrator | Thursday 09 October 2025 10:30:22 +0000 (0:00:00.635) 0:09:12.829 ****** 2025-10-09 10:32:53.409635 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:32:53.409638 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:32:53.409642 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:32:53.409646 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.409649 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.409653 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.409657 | orchestrator | 2025-10-09 10:32:53.409661 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:32:53.409664 | orchestrator | Thursday 09 October 2025 10:30:23 +0000 (0:00:00.943) 0:09:13.772 ****** 2025-10-09 10:32:53.409668 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409672 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.409675 | 
orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.409679 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.409683 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.409687 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.409690 | orchestrator | 2025-10-09 10:32:53.409697 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:32:53.409701 | orchestrator | Thursday 09 October 2025 10:30:23 +0000 (0:00:00.595) 0:09:14.368 ****** 2025-10-09 10:32:53.409704 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409708 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.409712 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.409715 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.409719 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409723 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409726 | orchestrator | 2025-10-09 10:32:53.409730 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:32:53.409734 | orchestrator | Thursday 09 October 2025 10:30:24 +0000 (0:00:01.005) 0:09:15.373 ****** 2025-10-09 10:32:53.409737 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409741 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.409745 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.409748 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.409752 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409756 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409759 | orchestrator | 2025-10-09 10:32:53.409763 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-10-09 10:32:53.409767 | orchestrator | Thursday 09 October 2025 10:30:25 +0000 (0:00:01.313) 0:09:16.687 ****** 2025-10-09 10:32:53.409771 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.409774 | orchestrator 
| 2025-10-09 10:32:53.409778 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-10-09 10:32:53.409785 | orchestrator | Thursday 09 October 2025 10:30:29 +0000 (0:00:04.006) 0:09:20.693 ****** 2025-10-09 10:32:53.409788 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409792 | orchestrator | 2025-10-09 10:32:53.409796 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-10-09 10:32:53.409800 | orchestrator | Thursday 09 October 2025 10:30:31 +0000 (0:00:02.022) 0:09:22.716 ****** 2025-10-09 10:32:53.409803 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409807 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.409811 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.409814 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.409818 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.409822 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.409825 | orchestrator | 2025-10-09 10:32:53.409831 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-10-09 10:32:53.409835 | orchestrator | Thursday 09 October 2025 10:30:33 +0000 (0:00:01.775) 0:09:24.492 ****** 2025-10-09 10:32:53.409839 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.409843 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.409846 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.409850 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.409854 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.409857 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.409861 | orchestrator | 2025-10-09 10:32:53.409865 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-10-09 10:32:53.409868 | orchestrator | Thursday 09 October 2025 10:30:34 +0000 (0:00:01.019) 0:09:25.511 
****** 2025-10-09 10:32:53.409872 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.409876 | orchestrator | 2025-10-09 10:32:53.409880 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-10-09 10:32:53.409884 | orchestrator | Thursday 09 October 2025 10:30:36 +0000 (0:00:01.376) 0:09:26.887 ****** 2025-10-09 10:32:53.409888 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.409892 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.409895 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.409899 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.409903 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.409907 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.409910 | orchestrator | 2025-10-09 10:32:53.409914 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-10-09 10:32:53.409918 | orchestrator | Thursday 09 October 2025 10:30:38 +0000 (0:00:01.851) 0:09:28.739 ****** 2025-10-09 10:32:53.409922 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.409925 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.409929 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.409933 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.409937 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.409941 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.409944 | orchestrator | 2025-10-09 10:32:53.409948 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-10-09 10:32:53.409952 | orchestrator | Thursday 09 October 2025 10:30:41 +0000 (0:00:03.906) 0:09:32.646 ****** 2025-10-09 10:32:53.409956 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.409959 | orchestrator | 2025-10-09 10:32:53.409963 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-10-09 10:32:53.409967 | orchestrator | Thursday 09 October 2025 10:30:43 +0000 (0:00:01.343) 0:09:33.989 ****** 2025-10-09 10:32:53.409970 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.409974 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.409981 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:32:53.409984 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.409988 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.409992 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.409995 | orchestrator | 2025-10-09 10:32:53.409999 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-10-09 10:32:53.410003 | orchestrator | Thursday 09 October 2025 10:30:43 +0000 (0:00:00.678) 0:09:34.667 ****** 2025-10-09 10:32:53.410007 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:32:53.410010 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:32:53.410050 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:32:53.410056 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.410060 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.410064 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.410068 | orchestrator | 2025-10-09 10:32:53.410072 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-10-09 10:32:53.410078 | orchestrator | Thursday 09 October 2025 10:30:46 +0000 (0:00:02.445) 0:09:37.112 ****** 2025-10-09 10:32:53.410082 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:32:53.410086 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:32:53.410090 | orchestrator | ok: 
[testbed-node-2] 2025-10-09 10:32:53.410094 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.410097 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410101 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410105 | orchestrator | 2025-10-09 10:32:53.410109 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-10-09 10:32:53.410112 | orchestrator | 2025-10-09 10:32:53.410116 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:32:53.410120 | orchestrator | Thursday 09 October 2025 10:30:47 +0000 (0:00:01.184) 0:09:38.297 ****** 2025-10-09 10:32:53.410124 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.410127 | orchestrator | 2025-10-09 10:32:53.410131 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:32:53.410135 | orchestrator | Thursday 09 October 2025 10:30:48 +0000 (0:00:00.527) 0:09:38.824 ****** 2025-10-09 10:32:53.410139 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.410142 | orchestrator | 2025-10-09 10:32:53.410146 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:32:53.410150 | orchestrator | Thursday 09 October 2025 10:30:48 +0000 (0:00:00.790) 0:09:39.614 ****** 2025-10-09 10:32:53.410154 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410157 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410161 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410165 | orchestrator | 2025-10-09 10:32:53.410168 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:32:53.410172 | orchestrator | 
Thursday 09 October 2025 10:30:49 +0000 (0:00:00.347) 0:09:39.961 ****** 2025-10-09 10:32:53.410176 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.410180 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410183 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410187 | orchestrator | 2025-10-09 10:32:53.410194 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:32:53.410198 | orchestrator | Thursday 09 October 2025 10:30:49 +0000 (0:00:00.715) 0:09:40.677 ****** 2025-10-09 10:32:53.410202 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.410214 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410218 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410222 | orchestrator | 2025-10-09 10:32:53.410226 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:32:53.410229 | orchestrator | Thursday 09 October 2025 10:30:50 +0000 (0:00:00.704) 0:09:41.382 ****** 2025-10-09 10:32:53.410233 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.410240 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410244 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410248 | orchestrator | 2025-10-09 10:32:53.410251 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:32:53.410255 | orchestrator | Thursday 09 October 2025 10:30:51 +0000 (0:00:01.093) 0:09:42.475 ****** 2025-10-09 10:32:53.410259 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410263 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410266 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410270 | orchestrator | 2025-10-09 10:32:53.410274 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-09 10:32:53.410278 | orchestrator | Thursday 09 October 2025 10:30:52 +0000 (0:00:00.324) 
0:09:42.799 ****** 2025-10-09 10:32:53.410282 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410285 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410289 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410293 | orchestrator | 2025-10-09 10:32:53.410297 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:32:53.410300 | orchestrator | Thursday 09 October 2025 10:30:52 +0000 (0:00:00.341) 0:09:43.141 ****** 2025-10-09 10:32:53.410304 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410308 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410312 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410315 | orchestrator | 2025-10-09 10:32:53.410319 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:32:53.410323 | orchestrator | Thursday 09 October 2025 10:30:52 +0000 (0:00:00.285) 0:09:43.426 ****** 2025-10-09 10:32:53.410327 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.410331 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410334 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410338 | orchestrator | 2025-10-09 10:32:53.410342 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:32:53.410346 | orchestrator | Thursday 09 October 2025 10:30:53 +0000 (0:00:01.027) 0:09:44.454 ****** 2025-10-09 10:32:53.410349 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.410353 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410357 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410361 | orchestrator | 2025-10-09 10:32:53.410364 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:32:53.410368 | orchestrator | Thursday 09 October 2025 10:30:54 +0000 (0:00:00.744) 0:09:45.198 ****** 2025-10-09 
10:32:53.410372 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410376 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410379 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410383 | orchestrator | 2025-10-09 10:32:53.410387 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:32:53.410391 | orchestrator | Thursday 09 October 2025 10:30:54 +0000 (0:00:00.326) 0:09:45.525 ****** 2025-10-09 10:32:53.410395 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410398 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410402 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410406 | orchestrator | 2025-10-09 10:32:53.410410 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:32:53.410413 | orchestrator | Thursday 09 October 2025 10:30:55 +0000 (0:00:00.350) 0:09:45.875 ****** 2025-10-09 10:32:53.410420 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.410424 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410428 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410431 | orchestrator | 2025-10-09 10:32:53.410435 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:32:53.410439 | orchestrator | Thursday 09 October 2025 10:30:55 +0000 (0:00:00.648) 0:09:46.524 ****** 2025-10-09 10:32:53.410443 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.410446 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410450 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410457 | orchestrator | 2025-10-09 10:32:53.410461 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:32:53.410464 | orchestrator | Thursday 09 October 2025 10:30:56 +0000 (0:00:00.319) 0:09:46.843 ****** 2025-10-09 10:32:53.410468 | orchestrator | ok: 
[testbed-node-3] 2025-10-09 10:32:53.410472 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410476 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410480 | orchestrator | 2025-10-09 10:32:53.410483 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:32:53.410487 | orchestrator | Thursday 09 October 2025 10:30:56 +0000 (0:00:00.340) 0:09:47.184 ****** 2025-10-09 10:32:53.410491 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410495 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410498 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410502 | orchestrator | 2025-10-09 10:32:53.410506 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:32:53.410510 | orchestrator | Thursday 09 October 2025 10:30:56 +0000 (0:00:00.341) 0:09:47.525 ****** 2025-10-09 10:32:53.410514 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410517 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410521 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410525 | orchestrator | 2025-10-09 10:32:53.410529 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:32:53.410532 | orchestrator | Thursday 09 October 2025 10:30:57 +0000 (0:00:00.604) 0:09:48.130 ****** 2025-10-09 10:32:53.410536 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410540 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410544 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410547 | orchestrator | 2025-10-09 10:32:53.410551 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:32:53.410557 | orchestrator | Thursday 09 October 2025 10:30:57 +0000 (0:00:00.311) 0:09:48.442 ****** 2025-10-09 10:32:53.410561 | orchestrator | ok: [testbed-node-3] 
2025-10-09 10:32:53.410565 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410569 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410573 | orchestrator | 2025-10-09 10:32:53.410576 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:32:53.410580 | orchestrator | Thursday 09 October 2025 10:30:58 +0000 (0:00:00.362) 0:09:48.805 ****** 2025-10-09 10:32:53.410584 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.410588 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.410592 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.410595 | orchestrator | 2025-10-09 10:32:53.410599 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-10-09 10:32:53.410603 | orchestrator | Thursday 09 October 2025 10:30:58 +0000 (0:00:00.853) 0:09:49.658 ****** 2025-10-09 10:32:53.410607 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410611 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410614 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-10-09 10:32:53.410618 | orchestrator | 2025-10-09 10:32:53.410622 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-10-09 10:32:53.410626 | orchestrator | Thursday 09 October 2025 10:30:59 +0000 (0:00:00.458) 0:09:50.116 ****** 2025-10-09 10:32:53.410630 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:32:53.410633 | orchestrator | 2025-10-09 10:32:53.410637 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-10-09 10:32:53.410641 | orchestrator | Thursday 09 October 2025 10:31:01 +0000 (0:00:02.077) 0:09:52.194 ****** 2025-10-09 10:32:53.410645 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-10-09 10:32:53.410650 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410659 | orchestrator | 2025-10-09 10:32:53.410663 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-10-09 10:32:53.410667 | orchestrator | Thursday 09 October 2025 10:31:01 +0000 (0:00:00.318) 0:09:52.512 ****** 2025-10-09 10:32:53.410671 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:32:53.410679 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:32:53.410683 | orchestrator | 2025-10-09 10:32:53.410687 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-10-09 10:32:53.410691 | orchestrator | Thursday 09 October 2025 10:31:09 +0000 (0:00:07.819) 0:10:00.332 ****** 2025-10-09 10:32:53.410695 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:32:53.410699 | orchestrator | 2025-10-09 10:32:53.410702 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-10-09 10:32:53.410706 | orchestrator | Thursday 09 October 2025 10:31:13 +0000 (0:00:03.600) 0:10:03.932 ****** 2025-10-09 10:32:53.410713 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.410717 | orchestrator | 2025-10-09 10:32:53.410721 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-10-09 10:32:53.410725 | orchestrator | Thursday 09 October 2025 10:31:14 +0000 (0:00:00.899) 0:10:04.831 ****** 2025-10-09 10:32:53.410728 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-10-09 10:32:53.410732 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-10-09 10:32:53.410736 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-10-09 10:32:53.410740 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-10-09 10:32:53.410744 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-10-09 10:32:53.410747 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-10-09 10:32:53.410751 | orchestrator | 2025-10-09 10:32:53.410755 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-10-09 10:32:53.410759 | orchestrator | Thursday 09 October 2025 10:31:15 +0000 (0:00:01.093) 0:10:05.925 ****** 2025-10-09 10:32:53.410762 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.410766 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:32:53.410770 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:32:53.410774 | orchestrator | 2025-10-09 10:32:53.410778 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-10-09 10:32:53.410781 | orchestrator | Thursday 09 October 2025 10:31:17 +0000 (0:00:02.132) 0:10:08.057 ****** 2025-10-09 10:32:53.410785 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:32:53.410789 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:32:53.410793 | orchestrator | changed: [testbed-node-3] 
2025-10-09 10:32:53.410797 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:32:53.410801 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-10-09 10:32:53.410807 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.410811 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:32:53.410815 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-10-09 10:32:53.410818 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.410822 | orchestrator | 2025-10-09 10:32:53.410826 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-10-09 10:32:53.410833 | orchestrator | Thursday 09 October 2025 10:31:18 +0000 (0:00:01.496) 0:10:09.554 ****** 2025-10-09 10:32:53.410837 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.410841 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.410845 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.410848 | orchestrator | 2025-10-09 10:32:53.410852 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-10-09 10:32:53.410856 | orchestrator | Thursday 09 October 2025 10:31:21 +0000 (0:00:03.069) 0:10:12.624 ****** 2025-10-09 10:32:53.410860 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.410863 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.410867 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.410871 | orchestrator | 2025-10-09 10:32:53.410875 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-10-09 10:32:53.410878 | orchestrator | Thursday 09 October 2025 10:31:22 +0000 (0:00:00.518) 0:10:13.142 ****** 2025-10-09 10:32:53.410882 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.410886 | orchestrator | 2025-10-09 10:32:53.410890 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-10-09 10:32:53.410894 | orchestrator | Thursday 09 October 2025 10:31:23 +0000 (0:00:00.790) 0:10:13.933 ****** 2025-10-09 10:32:53.410897 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.410901 | orchestrator | 2025-10-09 10:32:53.410905 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-10-09 10:32:53.410909 | orchestrator | Thursday 09 October 2025 10:31:24 +0000 (0:00:00.841) 0:10:14.775 ****** 2025-10-09 10:32:53.410913 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.410916 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.410920 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.410924 | orchestrator | 2025-10-09 10:32:53.410928 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-10-09 10:32:53.410931 | orchestrator | Thursday 09 October 2025 10:31:25 +0000 (0:00:01.473) 0:10:16.248 ****** 2025-10-09 10:32:53.410935 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.410939 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.410943 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.410946 | orchestrator | 2025-10-09 10:32:53.410950 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-10-09 10:32:53.410954 | orchestrator | Thursday 09 October 2025 10:31:26 +0000 (0:00:01.342) 0:10:17.590 ****** 2025-10-09 10:32:53.410958 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.410962 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.410965 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.410969 | orchestrator | 2025-10-09 10:32:53.410973 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-10-09 10:32:53.410977 | orchestrator | Thursday 09 October 2025 10:31:29 +0000 (0:00:02.426) 0:10:20.016 ****** 2025-10-09 10:32:53.410981 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.410984 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.410988 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.410992 | orchestrator | 2025-10-09 10:32:53.410996 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-10-09 10:32:53.410999 | orchestrator | Thursday 09 October 2025 10:31:31 +0000 (0:00:02.039) 0:10:22.056 ****** 2025-10-09 10:32:53.411003 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411009 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411013 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411017 | orchestrator | 2025-10-09 10:32:53.411021 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-10-09 10:32:53.411024 | orchestrator | Thursday 09 October 2025 10:31:33 +0000 (0:00:01.916) 0:10:23.973 ****** 2025-10-09 10:32:53.411028 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.411036 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.411039 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.411043 | orchestrator | 2025-10-09 10:32:53.411047 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-10-09 10:32:53.411051 | orchestrator | Thursday 09 October 2025 10:31:34 +0000 (0:00:00.817) 0:10:24.790 ****** 2025-10-09 10:32:53.411055 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.411058 | orchestrator | 2025-10-09 10:32:53.411062 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-10-09 10:32:53.411066 | orchestrator | 
Thursday 09 October 2025 10:31:34 +0000 (0:00:00.618) 0:10:25.409 ****** 2025-10-09 10:32:53.411070 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411074 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411077 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411081 | orchestrator | 2025-10-09 10:32:53.411085 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-10-09 10:32:53.411089 | orchestrator | Thursday 09 October 2025 10:31:35 +0000 (0:00:00.641) 0:10:26.050 ****** 2025-10-09 10:32:53.411092 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.411096 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.411100 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.411104 | orchestrator | 2025-10-09 10:32:53.411107 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-10-09 10:32:53.411111 | orchestrator | Thursday 09 October 2025 10:31:36 +0000 (0:00:01.259) 0:10:27.310 ****** 2025-10-09 10:32:53.411115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:32:53.411119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:32:53.411123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:32:53.411128 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411132 | orchestrator | 2025-10-09 10:32:53.411136 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-10-09 10:32:53.411140 | orchestrator | Thursday 09 October 2025 10:31:37 +0000 (0:00:00.654) 0:10:27.965 ****** 2025-10-09 10:32:53.411144 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411148 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411151 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411155 | orchestrator | 2025-10-09 10:32:53.411159 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-10-09 10:32:53.411163 | orchestrator | 2025-10-09 10:32:53.411166 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-10-09 10:32:53.411170 | orchestrator | Thursday 09 October 2025 10:31:37 +0000 (0:00:00.560) 0:10:28.526 ****** 2025-10-09 10:32:53.411174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.411178 | orchestrator | 2025-10-09 10:32:53.411182 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-10-09 10:32:53.411185 | orchestrator | Thursday 09 October 2025 10:31:38 +0000 (0:00:00.814) 0:10:29.340 ****** 2025-10-09 10:32:53.411189 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.411193 | orchestrator | 2025-10-09 10:32:53.411197 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-10-09 10:32:53.411201 | orchestrator | Thursday 09 October 2025 10:31:39 +0000 (0:00:00.541) 0:10:29.882 ****** 2025-10-09 10:32:53.411204 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411229 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411233 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.411236 | orchestrator | 2025-10-09 10:32:53.411240 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-10-09 10:32:53.411244 | orchestrator | Thursday 09 October 2025 10:31:39 +0000 (0:00:00.619) 0:10:30.502 ****** 2025-10-09 10:32:53.411252 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411255 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411259 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411263 | orchestrator | 
2025-10-09 10:32:53.411267 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-10-09 10:32:53.411270 | orchestrator | Thursday 09 October 2025 10:31:40 +0000 (0:00:00.847) 0:10:31.350 ****** 2025-10-09 10:32:53.411274 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411278 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411281 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411285 | orchestrator | 2025-10-09 10:32:53.411289 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-10-09 10:32:53.411293 | orchestrator | Thursday 09 October 2025 10:31:41 +0000 (0:00:00.725) 0:10:32.075 ****** 2025-10-09 10:32:53.411297 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411300 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411304 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411308 | orchestrator | 2025-10-09 10:32:53.411311 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-10-09 10:32:53.411315 | orchestrator | Thursday 09 October 2025 10:31:42 +0000 (0:00:00.715) 0:10:32.791 ****** 2025-10-09 10:32:53.411319 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411323 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411326 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.411330 | orchestrator | 2025-10-09 10:32:53.411334 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-10-09 10:32:53.411338 | orchestrator | Thursday 09 October 2025 10:31:42 +0000 (0:00:00.632) 0:10:33.424 ****** 2025-10-09 10:32:53.411341 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411345 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411349 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.411353 | orchestrator | 2025-10-09 10:32:53.411359 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-10-09 10:32:53 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:32:53.411367 | orchestrator | Thursday 09 October 2025 10:31:43 +0000 (0:00:00.355) 0:10:33.779 ****** 2025-10-09 10:32:53.411371 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411374 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411378 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.411382 | orchestrator | 2025-10-09 10:32:53.411385 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-10-09 10:32:53.411389 | orchestrator | Thursday 09 October 2025 10:31:43 +0000 (0:00:00.365) 0:10:34.145 ****** 2025-10-09 10:32:53.411393 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411397 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411401 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411404 | orchestrator | 2025-10-09 10:32:53.411408 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-10-09 10:32:53.411412 | orchestrator | Thursday 09 October 2025 10:31:44 +0000 (0:00:00.735) 0:10:34.880 ****** 2025-10-09 10:32:53.411415 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411419 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411423 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411427 | orchestrator | 2025-10-09 10:32:53.411430 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-10-09 10:32:53.411434 | orchestrator | Thursday 09 October 2025 10:31:45 +0000 (0:00:01.103) 0:10:35.984 ****** 2025-10-09 10:32:53.411438 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411442 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411445 | orchestrator | skipping: [testbed-node-5] 
2025-10-09 10:32:53.411449 | orchestrator | 2025-10-09 10:32:53.411453 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-10-09 10:32:53.411457 | orchestrator | Thursday 09 October 2025 10:31:45 +0000 (0:00:00.343) 0:10:36.328 ****** 2025-10-09 10:32:53.411463 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411467 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411471 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.411475 | orchestrator | 2025-10-09 10:32:53.411478 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-10-09 10:32:53.411485 | orchestrator | Thursday 09 October 2025 10:31:45 +0000 (0:00:00.327) 0:10:36.655 ****** 2025-10-09 10:32:53.411489 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411492 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411496 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411500 | orchestrator | 2025-10-09 10:32:53.411504 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-10-09 10:32:53.411508 | orchestrator | Thursday 09 October 2025 10:31:46 +0000 (0:00:00.360) 0:10:37.016 ****** 2025-10-09 10:32:53.411511 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411515 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411519 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411522 | orchestrator | 2025-10-09 10:32:53.411526 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-10-09 10:32:53.411530 | orchestrator | Thursday 09 October 2025 10:31:46 +0000 (0:00:00.669) 0:10:37.686 ****** 2025-10-09 10:32:53.411534 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411537 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411541 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411545 | orchestrator | 2025-10-09 
10:32:53.411549 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-10-09 10:32:53.411552 | orchestrator | Thursday 09 October 2025 10:31:47 +0000 (0:00:00.373) 0:10:38.060 ****** 2025-10-09 10:32:53.411556 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411560 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411564 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.411567 | orchestrator | 2025-10-09 10:32:53.411571 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-10-09 10:32:53.411575 | orchestrator | Thursday 09 October 2025 10:31:47 +0000 (0:00:00.315) 0:10:38.375 ****** 2025-10-09 10:32:53.411579 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411582 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411586 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.411590 | orchestrator | 2025-10-09 10:32:53.411593 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-10-09 10:32:53.411597 | orchestrator | Thursday 09 October 2025 10:31:47 +0000 (0:00:00.311) 0:10:38.687 ****** 2025-10-09 10:32:53.411601 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411605 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411609 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.411612 | orchestrator | 2025-10-09 10:32:53.411616 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-10-09 10:32:53.411620 | orchestrator | Thursday 09 October 2025 10:31:48 +0000 (0:00:00.629) 0:10:39.316 ****** 2025-10-09 10:32:53.411623 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411627 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411631 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411635 | orchestrator | 2025-10-09 10:32:53.411638 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-10-09 10:32:53.411642 | orchestrator | Thursday 09 October 2025 10:31:48 +0000 (0:00:00.377) 0:10:39.694 ****** 2025-10-09 10:32:53.411646 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:32:53.411650 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:32:53.411653 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:32:53.411657 | orchestrator | 2025-10-09 10:32:53.411661 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-10-09 10:32:53.411665 | orchestrator | Thursday 09 October 2025 10:31:49 +0000 (0:00:00.568) 0:10:40.262 ****** 2025-10-09 10:32:53.411668 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.411675 | orchestrator | 2025-10-09 10:32:53.411679 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-10-09 10:32:53.411683 | orchestrator | Thursday 09 October 2025 10:31:50 +0000 (0:00:00.878) 0:10:41.140 ****** 2025-10-09 10:32:53.411686 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.411692 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:32:53.411696 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:32:53.411700 | orchestrator | 2025-10-09 10:32:53.411704 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-10-09 10:32:53.411707 | orchestrator | Thursday 09 October 2025 10:31:52 +0000 (0:00:02.146) 0:10:43.287 ****** 2025-10-09 10:32:53.411711 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:32:53.411715 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-10-09 10:32:53.411719 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.411722 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-10-09 10:32:53.411726 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-10-09 10:32:53.411730 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.411733 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:32:53.411737 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-10-09 10:32:53.411741 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.411745 | orchestrator | 2025-10-09 10:32:53.411748 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-10-09 10:32:53.411752 | orchestrator | Thursday 09 October 2025 10:31:53 +0000 (0:00:01.243) 0:10:44.531 ****** 2025-10-09 10:32:53.411756 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:32:53.411760 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:32:53.411763 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:32:53.411767 | orchestrator | 2025-10-09 10:32:53.411771 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-10-09 10:32:53.411774 | orchestrator | Thursday 09 October 2025 10:31:54 +0000 (0:00:00.619) 0:10:45.151 ****** 2025-10-09 10:32:53.411778 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:32:53.411782 | orchestrator | 2025-10-09 10:32:53.411786 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-10-09 10:32:53.411790 | orchestrator | Thursday 09 October 2025 10:31:55 +0000 (0:00:00.572) 0:10:45.723 ****** 2025-10-09 10:32:53.411796 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-10-09 10:32:53.411800 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-10-09 10:32:53.411804 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-10-09 10:32:53.411807 | orchestrator | 2025-10-09 10:32:53.411811 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-10-09 10:32:53.411815 | orchestrator | Thursday 09 October 2025 10:31:55 +0000 (0:00:00.832) 0:10:46.556 ****** 2025-10-09 10:32:53.411819 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.411823 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-10-09 10:32:53.411826 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.411830 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-10-09 10:32:53.411834 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.411840 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-10-09 10:32:53.411844 | orchestrator | 2025-10-09 10:32:53.411848 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-10-09 10:32:53.411852 | orchestrator | Thursday 09 October 2025 10:32:00 +0000 (0:00:04.941) 0:10:51.497 ****** 2025-10-09 10:32:53.411856 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.411859 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:32:53.411863 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.411867 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:32:53.411870 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:32:53.411874 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:32:53.411878 | orchestrator | 2025-10-09 10:32:53.411882 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-10-09 10:32:53.411885 | orchestrator | Thursday 09 October 2025 10:32:03 +0000 (0:00:02.307) 0:10:53.804 ****** 2025-10-09 10:32:53.411889 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:32:53.411893 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:32:53.411897 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:32:53.411901 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:32:53.411904 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:32:53.411908 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:32:53.411912 | orchestrator | 2025-10-09 10:32:53.411916 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-10-09 10:32:53.411919 | orchestrator | Thursday 09 October 2025 10:32:04 +0000 (0:00:01.332) 0:10:55.137 ****** 2025-10-09 10:32:53.411923 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-10-09 10:32:53.411927 | orchestrator | 2025-10-09 10:32:53.411932 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-10-09 10:32:53.411936 | orchestrator | Thursday 09 October 2025 10:32:04 +0000 (0:00:00.244) 0:10:55.381 ****** 2025-10-09 10:32:53.411940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})
2025-10-09 10:32:53.411944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.411948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.411951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.411955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.411959 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.411963 | orchestrator |
2025-10-09 10:32:53.411966 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-10-09 10:32:53.411970 | orchestrator | Thursday 09 October 2025 10:32:05 +0000 (0:00:00.912) 0:10:56.294 ******
2025-10-09 10:32:53.411974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.411978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.411981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.411991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.411995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.411999 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.412003 | orchestrator |
2025-10-09 10:32:53.412007 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-10-09 10:32:53.412010 | orchestrator | Thursday 09 October 2025 10:32:06 +0000 (0:00:00.920) 0:10:57.214 ******
2025-10-09 10:32:53.412014 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.412018 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.412022 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.412026 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.412029 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-10-09 10:32:53.412033 | orchestrator |
2025-10-09 10:32:53.412037 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-10-09 10:32:53.412041 | orchestrator | Thursday 09 October 2025 10:32:38 +0000 (0:00:32.170) 0:11:29.385 ******
2025-10-09 10:32:53.412044 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.412048 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.412052 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.412056 | orchestrator |
2025-10-09 10:32:53.412059 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-10-09 10:32:53.412063 | orchestrator | Thursday 09 October 2025 10:32:39 +0000 (0:00:00.633) 0:11:30.019 ******
2025-10-09 10:32:53.412067 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.412070 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.412074 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.412078 | orchestrator |
2025-10-09 10:32:53.412082 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-10-09 10:32:53.412085 | orchestrator | Thursday 09 October 2025 10:32:39 +0000 (0:00:00.336) 0:11:30.355 ******
2025-10-09 10:32:53.412089 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.412093 | orchestrator |
2025-10-09 10:32:53.412097 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-10-09 10:32:53.412100 | orchestrator | Thursday 09 October 2025 10:32:40 +0000 (0:00:00.576) 0:11:30.932 ******
2025-10-09 10:32:53.412104 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.412108 | orchestrator |
2025-10-09 10:32:53.412112 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-10-09 10:32:53.412115 | orchestrator | Thursday 09 October 2025 10:32:41 +0000 (0:00:00.843) 0:11:31.775 ******
2025-10-09 10:32:53.412119 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:32:53.412123 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:32:53.412129 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:32:53.412133 | orchestrator |
2025-10-09 10:32:53.412137 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-10-09 10:32:53.412140 | orchestrator | Thursday 09 October 2025 10:32:42 +0000 (0:00:01.339) 0:11:33.115 ******
2025-10-09 10:32:53.412144 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:32:53.412151 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:32:53.412154 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:32:53.412158 | orchestrator |
2025-10-09 10:32:53.412162 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-10-09 10:32:53.412166 | orchestrator | Thursday 09 October 2025 10:32:43 +0000 (0:00:01.206) 0:11:34.321 ******
2025-10-09 10:32:53.412169 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:32:53.412173 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:32:53.412177 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:32:53.412180 | orchestrator |
2025-10-09 10:32:53.412184 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-10-09 10:32:53.412188 | orchestrator | Thursday 09 October 2025 10:32:45 +0000 (0:00:02.057) 0:11:36.379 ******
2025-10-09 10:32:53.412192 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.412196 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.412199 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-10-09 10:32:53.412203 | orchestrator |
2025-10-09 10:32:53.412217 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-10-09 10:32:53.412221 | orchestrator | Thursday 09 October 2025 10:32:48 +0000 (0:00:02.445) 0:11:38.824 ******
2025-10-09 10:32:53.412225 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.412229 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.412232 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.412236 | orchestrator |
2025-10-09 10:32:53.412242 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-10-09 10:32:53.412246 | orchestrator | Thursday 09 October 2025 10:32:48 +0000 (0:00:00.654) 0:11:39.479 ******
2025-10-09 10:32:53.412250 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:32:53.412253 | orchestrator |
2025-10-09 10:32:53.412257 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-10-09 10:32:53.412261 | orchestrator | Thursday 09 October 2025 10:32:49 +0000 (0:00:00.607) 0:11:40.087 ******
2025-10-09 10:32:53.412265 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.412268 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.412272 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.412276 | orchestrator |
2025-10-09 10:32:53.412279 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-10-09 10:32:53.412283 | orchestrator | Thursday 09 October 2025 10:32:49 +0000 (0:00:00.315) 0:11:40.402 ******
2025-10-09 10:32:53.412287 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.412291 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:32:53.412294 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:32:53.412298 | orchestrator |
2025-10-09 10:32:53.412302 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-10-09 10:32:53.412305 | orchestrator | Thursday 09 October 2025 10:32:50 +0000 (0:00:00.679) 0:11:41.082 ******
2025-10-09 10:32:53.412309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-10-09 10:32:53.412313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-10-09 10:32:53.412317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-10-09 10:32:53.412320 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:32:53.412324 | orchestrator |
2025-10-09 10:32:53.412328 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-10-09 10:32:53.412331 | orchestrator | Thursday 09 October 2025 10:32:51 +0000 (0:00:00.685) 0:11:41.767 ******
2025-10-09 10:32:53.412335 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:32:53.412339 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:32:53.412343 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:32:53.412349 | orchestrator |
2025-10-09 10:32:53.412353 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:32:53.412357 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-10-09 10:32:53.412360 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-10-09 10:32:53.412364 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-10-09 10:32:53.412368 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-10-09 10:32:53.412372 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-10-09 10:32:53.412376 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-10-09 10:32:53.412379 | orchestrator |
2025-10-09 10:32:53.412383 | orchestrator |
2025-10-09 10:32:53.412387 | orchestrator |
2025-10-09 10:32:53.412391 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:32:53.412396 | orchestrator | Thursday 09 October 2025 10:32:51 +0000 (0:00:00.259) 0:11:42.027 ******
2025-10-09 10:32:53.412400 | orchestrator | ===============================================================================
2025-10-09 10:32:53.412404 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 51.02s
2025-10-09 10:32:53.412408 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.23s
2025-10-09 10:32:53.412412 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.17s
2025-10-09 10:32:53.412415 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.15s
2025-10-09 10:32:53.412419 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.76s
2025-10-09 10:32:53.412423 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.89s
2025-10-09 10:32:53.412426 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.78s
2025-10-09 10:32:53.412430 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.70s
2025-10-09 10:32:53.412434 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.53s
2025-10-09 10:32:53.412437 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.82s
2025-10-09 10:32:53.412441 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.06s
2025-10-09 10:32:53.412445 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.39s
2025-10-09 10:32:53.412449 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.09s
2025-10-09 10:32:53.412452 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.94s
2025-10-09 10:32:53.412456 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.20s
2025-10-09 10:32:53.412460 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.01s
2025-10-09 10:32:53.412463 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.91s
2025-10-09 10:32:53.412469 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.79s
2025-10-09 10:32:53.412473 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.69s
2025-10-09 10:32:53.412477 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.64s
2025-10-09 10:32:56.451871 | orchestrator | 2025-10-09 10:32:56 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED
2025-10-09 10:32:56.454617 | orchestrator | 2025-10-09 10:32:56 | INFO  | Task d173bdf2-63a1-4352-bc0b-3eaf6f106cfe is in state STARTED
2025-10-09 10:32:56.457703 | orchestrator | 2025-10-09 10:32:56 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED
2025-10-09 10:32:56.457911 | orchestrator | 2025-10-09 10:32:56 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:32:59.506799 | orchestrator | 2025-10-09 10:32:59 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED
2025-10-09 10:32:59.510066 | orchestrator | 2025-10-09 10:32:59 | INFO  | Task d173bdf2-63a1-4352-bc0b-3eaf6f106cfe is in state STARTED
2025-10-09 10:32:59.511746 | orchestrator | 2025-10-09 10:32:59 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED
2025-10-09 10:32:59.511971 | orchestrator | 2025-10-09 10:32:59 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:33:02.562647 | orchestrator | 2025-10-09 10:33:02 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED
2025-10-09 10:33:02.564864 | orchestrator | 2025-10-09 10:33:02 | INFO  | Task d173bdf2-63a1-4352-bc0b-3eaf6f106cfe is in state STARTED
2025-10-09 10:33:02.569776 | orchestrator | 2025-10-09 10:33:02 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED
2025-10-09 10:33:02.569799 | orchestrator | 2025-10-09 10:33:02 |
INFO  | Wait 1 second(s) until the next check
2025-10-09 10:33:51.411480 | orchestrator | 2025-10-09 10:33:51 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state
STARTED
2025-10-09 10:33:51.414391 | orchestrator | 2025-10-09 10:33:51 | INFO  | Task d173bdf2-63a1-4352-bc0b-3eaf6f106cfe is in state STARTED
2025-10-09 10:33:51.416751 | orchestrator | 2025-10-09 10:33:51 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED
2025-10-09 10:33:51.416807 | orchestrator | 2025-10-09 10:33:51 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:33:54.465286 | orchestrator | 2025-10-09 10:33:54 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED
2025-10-09 10:33:54.466342 | orchestrator | 2025-10-09 10:33:54 | INFO  | Task d173bdf2-63a1-4352-bc0b-3eaf6f106cfe is in state STARTED
2025-10-09 10:33:54.467925 | orchestrator | 2025-10-09 10:33:54 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED
2025-10-09 10:33:54.468264 | orchestrator | 2025-10-09 10:33:54 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:33:57.518352 | orchestrator | 2025-10-09 10:33:57 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED
2025-10-09 10:33:57.521361 | orchestrator |
2025-10-09 10:33:57.521415 | orchestrator |
2025-10-09 10:33:57.521435 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:33:57.521453 | orchestrator |
2025-10-09 10:33:57.521471 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:33:57.521488 | orchestrator | Thursday 09 October 2025 10:30:55 +0000 (0:00:00.259) 0:00:00.259 ******
2025-10-09 10:33:57.521505 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:33:57.521523 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:33:57.521540 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:33:57.521557 | orchestrator |
2025-10-09 10:33:57.521574 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:33:57.521591 | orchestrator | Thursday 09 October 2025 10:30:55 +0000 (0:00:00.283) 0:00:00.542 ******
2025-10-09 10:33:57.521608 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-10-09 10:33:57.521683 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-10-09 10:33:57.521701 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-10-09 10:33:57.521718 | orchestrator |
2025-10-09 10:33:57.521735 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-10-09 10:33:57.521752 | orchestrator |
2025-10-09 10:33:57.521770 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-10-09 10:33:57.521785 | orchestrator | Thursday 09 October 2025 10:30:56 +0000 (0:00:00.472) 0:00:01.015 ******
2025-10-09 10:33:57.521803 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:33:57.521820 | orchestrator |
2025-10-09 10:33:57.521836 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-10-09 10:33:57.521852 | orchestrator | Thursday 09 October 2025 10:30:56 +0000 (0:00:00.510) 0:00:01.525 ******
2025-10-09 10:33:57.521868 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-10-09 10:33:57.521886 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-10-09 10:33:57.521904 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-10-09 10:33:57.521921 | orchestrator |
2025-10-09 10:33:57.521938 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-10-09 10:33:57.521955 | orchestrator | Thursday 09 October 2025 10:30:57 +0000 (0:00:00.699) 0:00:02.225 ******
2025-10-09 10:33:57.521977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value':
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-09 10:33:57.522064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-09 10:33:57.522108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-09 10:33:57.522146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-09 10:33:57.522169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-09 10:33:57.522196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-09 10:33:57.522238 | orchestrator |
2025-10-09 10:33:57.522256 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-10-09 10:33:57.522272 | orchestrator | Thursday 09 October 2025 10:30:59 +0000 (0:00:01.908) 0:00:04.134 ******
2025-10-09 10:33:57.522288 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:33:57.522305 | orchestrator |
2025-10-09 10:33:57.522320 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2025-10-09 10:33:57.522337 | orchestrator | Thursday 09 October 2025 10:31:00 +0000 (0:00:00.537) 0:00:04.671 ******
2025-10-09 10:33:57.522358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-09 10:33:57.522379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-09 10:33:57.522390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-09 10:33:57.522406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-09 10:33:57.522424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-09 10:33:57.522442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-10-09 10:33:57.522452 | orchestrator |
2025-10-09 10:33:57.522463 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2025-10-09 10:33:57.522473 | orchestrator | Thursday 09 October 2025 10:31:02 +0000 (0:00:02.739) 0:00:07.410 ******
2025-10-09 10:33:57.522483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-10-09 10:33:57.522499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes':
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:33:57.522510 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:33:57.522520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:33:57.522543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:33:57.522555 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:33:57.522565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:33:57.522580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:33:57.522591 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:33:57.522601 | orchestrator | 2025-10-09 10:33:57.522611 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-10-09 10:33:57.522621 | orchestrator | Thursday 09 October 2025 10:31:04 +0000 (0:00:01.396) 0:00:08.807 ****** 2025-10-09 10:33:57.522636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:33:57.522654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:33:57.522665 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:33:57.522675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:33:57.522690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:33:57.522700 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:33:57.522710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-10-09 10:33:57.522733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-10-09 10:33:57.522744 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:33:57.522754 | orchestrator | 2025-10-09 10:33:57.522764 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-10-09 10:33:57.522773 | orchestrator | Thursday 09 October 2025 10:31:05 +0000 (0:00:00.926) 0:00:09.733 ****** 2025-10-09 10:33:57.522783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:33:57.522794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:33:57.522809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:33:57.522837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:33:57.522849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:33:57.522860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:33:57.522871 | orchestrator | 2025-10-09 10:33:57.522881 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-10-09 10:33:57.522897 | orchestrator | Thursday 09 October 2025 10:31:07 +0000 (0:00:02.497) 0:00:12.231 ****** 2025-10-09 10:33:57.522907 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:33:57.522917 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:33:57.522926 | 
orchestrator | changed: [testbed-node-1] 2025-10-09 10:33:57.522936 | orchestrator | 2025-10-09 10:33:57.522946 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-10-09 10:33:57.522960 | orchestrator | Thursday 09 October 2025 10:31:10 +0000 (0:00:02.922) 0:00:15.154 ****** 2025-10-09 10:33:57.522970 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:33:57.522980 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:33:57.522989 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:33:57.522999 | orchestrator | 2025-10-09 10:33:57.523009 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-10-09 10:33:57.523018 | orchestrator | Thursday 09 October 2025 10:31:12 +0000 (0:00:01.965) 0:00:17.119 ****** 2025-10-09 10:33:57.523029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:33:57.523046 | orchestrator | 2025-10-09 10:33:57 | INFO  | Task d173bdf2-63a1-4352-bc0b-3eaf6f106cfe is in state SUCCESS 2025-10-09 10:33:57.523058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:33:57.523069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-10-09 10:33:57.523079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:33:57.523100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:33:57.523118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-10-09 10:33:57.523129 | orchestrator | 2025-10-09 10:33:57.523139 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-10-09 10:33:57.523148 | orchestrator | Thursday 09 October 2025 10:31:14 +0000 (0:00:02.176) 0:00:19.296 ****** 2025-10-09 10:33:57.523158 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:33:57.523168 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:33:57.523178 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:33:57.523187 | orchestrator | 2025-10-09 10:33:57.523197 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-10-09 10:33:57.523225 | orchestrator | Thursday 09 October 2025 10:31:14 +0000 (0:00:00.346) 0:00:19.642 ****** 2025-10-09 10:33:57.523235 | orchestrator | 2025-10-09 10:33:57.523245 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-10-09 10:33:57.523254 | orchestrator | Thursday 09 October 2025 10:31:15 +0000 (0:00:00.071) 0:00:19.714 ****** 2025-10-09 10:33:57.523264 | orchestrator | 2025-10-09 
10:33:57.523274 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-10-09 10:33:57.523283 | orchestrator | Thursday 09 October 2025 10:31:15 +0000 (0:00:00.069) 0:00:19.783 ******
2025-10-09 10:33:57.523293 | orchestrator |
2025-10-09 10:33:57.523311 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-10-09 10:33:57.523321 | orchestrator | Thursday 09 October 2025 10:31:15 +0000 (0:00:00.085) 0:00:19.869 ******
2025-10-09 10:33:57.523331 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:33:57.523340 | orchestrator |
2025-10-09 10:33:57.523350 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-10-09 10:33:57.523360 | orchestrator | Thursday 09 October 2025 10:31:15 +0000 (0:00:00.264) 0:00:20.134 ******
2025-10-09 10:33:57.523370 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:33:57.523379 | orchestrator |
2025-10-09 10:33:57.523389 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-10-09 10:33:57.523399 | orchestrator | Thursday 09 October 2025 10:31:16 +0000 (0:00:00.674) 0:00:20.808 ******
2025-10-09 10:33:57.523408 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:33:57.523418 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:33:57.523428 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:33:57.523437 | orchestrator |
2025-10-09 10:33:57.523447 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-10-09 10:33:57.523457 | orchestrator | Thursday 09 October 2025 10:32:20 +0000 (0:01:03.993) 0:01:24.801 ******
2025-10-09 10:33:57.523467 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:33:57.523476 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:33:57.523486 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:33:57.523496 | orchestrator |
2025-10-09 10:33:57.523505 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-10-09 10:33:57.523515 | orchestrator | Thursday 09 October 2025 10:33:45 +0000 (0:01:25.695) 0:02:50.497 ******
2025-10-09 10:33:57.523525 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:33:57.523534 | orchestrator |
2025-10-09 10:33:57.523548 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-10-09 10:33:57.523558 | orchestrator | Thursday 09 October 2025 10:33:46 +0000 (0:00:00.740) 0:02:51.238 ******
2025-10-09 10:33:57.523568 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:33:57.523578 | orchestrator |
2025-10-09 10:33:57.523587 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-10-09 10:33:57.523597 | orchestrator | Thursday 09 October 2025 10:33:49 +0000 (0:00:02.488) 0:02:53.727 ******
2025-10-09 10:33:57.523606 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:33:57.523616 | orchestrator |
2025-10-09 10:33:57.523626 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-10-09 10:33:57.523635 | orchestrator | Thursday 09 October 2025 10:33:51 +0000 (0:00:02.300) 0:02:56.027 ******
2025-10-09 10:33:57.523645 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:33:57.523655 | orchestrator |
2025-10-09 10:33:57.523664 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-10-09 10:33:57.523674 | orchestrator | Thursday 09 October 2025 10:33:54 +0000 (0:00:02.705) 0:02:58.733 ******
2025-10-09 10:33:57.523684 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:33:57.523693 | orchestrator |
2025-10-09 10:33:57.523703 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:33:57.523714 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-10-09 10:33:57.523724 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-10-09 10:33:57.523740 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-10-09 10:33:57.523750 | orchestrator |
2025-10-09 10:33:57.523760 | orchestrator |
2025-10-09 10:33:57.523769 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:33:57.523779 | orchestrator | Thursday 09 October 2025 10:33:56 +0000 (0:00:02.421) 0:03:01.154 ******
2025-10-09 10:33:57.523794 | orchestrator | ===============================================================================
2025-10-09 10:33:57.523804 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 85.70s
2025-10-09 10:33:57.523814 | orchestrator | opensearch : Restart opensearch container ------------------------------ 63.99s
2025-10-09 10:33:57.523824 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.92s
2025-10-09 10:33:57.523833 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.74s
2025-10-09 10:33:57.523843 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.71s
2025-10-09 10:33:57.523852 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.50s
2025-10-09 10:33:57.523862 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.49s
2025-10-09 10:33:57.523872 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.42s
2025-10-09 10:33:57.523881 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.30s
2025-10-09 10:33:57.523891 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.18s
2025-10-09 10:33:57.523901 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.97s
2025-10-09 10:33:57.523910 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.91s
2025-10-09 10:33:57.523920 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.40s
2025-10-09 10:33:57.523930 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.93s
2025-10-09 10:33:57.523939 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.74s
2025-10-09 10:33:57.523949 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s
2025-10-09 10:33:57.523958 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.67s
2025-10-09 10:33:57.523968 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2025-10-09 10:33:57.523977 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2025-10-09 10:33:57.523987 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2025-10-09 10:33:57.523997 | orchestrator | 2025-10-09 10:33:57 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED
2025-10-09 10:33:57.524007 | orchestrator | 2025-10-09 10:33:57 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:00.568060 | orchestrator | 2025-10-09 10:34:00 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED
2025-10-09 10:34:00.568354 | orchestrator | 2025-10-09 10:34:00 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED
2025-10-09 10:34:00.568451 | orchestrator | 2025-10-09 10:34:00 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:03.613954 | orchestrator | 2025-10-09 10:34:03 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED
2025-10-09 10:34:03.615398 | orchestrator | 2025-10-09 10:34:03 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED
2025-10-09 10:34:03.615429 | orchestrator | 2025-10-09 10:34:03 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:06.658556 | orchestrator | 2025-10-09 10:34:06 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED
2025-10-09 10:34:06.659520 | orchestrator | 2025-10-09 10:34:06 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state STARTED
2025-10-09 10:34:06.659795 | orchestrator | 2025-10-09 10:34:06 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:34:09.710118 | orchestrator | 2025-10-09 10:34:09 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED
2025-10-09 10:34:09.712555 | orchestrator | 2025-10-09 10:34:09 | INFO  | Task 8bc470d3-a0e9-4547-b09d-7673b3aed473 is in state SUCCESS
2025-10-09 10:34:09.714253 | orchestrator |
2025-10-09 10:34:09.714345 | orchestrator |
2025-10-09 10:34:09.714361 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-10-09 10:34:09.714374 | orchestrator |
2025-10-09 10:34:09.714386 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-10-09 10:34:09.714397 | orchestrator | Thursday 09 October 2025 10:30:55 +0000 (0:00:00.098) 0:00:00.098 ******
2025-10-09 10:34:09.714408 | orchestrator | ok: [localhost] => {
2025-10-09 10:34:09.714421 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-10-09 10:34:09.714432 | orchestrator | }
2025-10-09 10:34:09.714444 | orchestrator |
2025-10-09 10:34:09.714455 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-10-09 10:34:09.714466 | orchestrator | Thursday 09 October 2025 10:30:55 +0000 (0:00:00.057) 0:00:00.155 ******
2025-10-09 10:34:09.714477 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-10-09 10:34:09.714490 | orchestrator | ...ignoring
2025-10-09 10:34:09.714501 | orchestrator |
2025-10-09 10:34:09.714513 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-10-09 10:34:09.714524 | orchestrator | Thursday 09 October 2025 10:30:58 +0000 (0:00:02.883) 0:00:03.038 ******
2025-10-09 10:34:09.714536 | orchestrator | skipping: [localhost]
2025-10-09 10:34:09.714547 | orchestrator |
2025-10-09 10:34:09.714558 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-10-09 10:34:09.714569 | orchestrator | Thursday 09 October 2025 10:30:58 +0000 (0:00:00.065) 0:00:03.104 ******
2025-10-09 10:34:09.714580 | orchestrator | ok: [localhost]
2025-10-09 10:34:09.714591 | orchestrator |
2025-10-09 10:34:09.714602 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:34:09.714613 | orchestrator |
2025-10-09 10:34:09.714623 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:34:09.714634 | orchestrator | Thursday 09 October 2025 10:30:58 +0000 (0:00:00.174) 0:00:03.278 ******
2025-10-09 10:34:09.714645 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:34:09.714656 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:34:09.714667 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:34:09.714678 | orchestrator |
2025-10-09 10:34:09.714689 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:34:09.714700 | orchestrator | Thursday 09 October 2025 10:30:59 +0000 (0:00:00.361) 0:00:03.640 ******
2025-10-09 10:34:09.714711 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-10-09 10:34:09.714723 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-10-09 10:34:09.714734 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-10-09 10:34:09.714745 | orchestrator |
2025-10-09 10:34:09.714756 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-10-09 10:34:09.714767 | orchestrator |
2025-10-09 10:34:09.714778 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-10-09 10:34:09.714789 | orchestrator | Thursday 09 October 2025 10:30:59 +0000 (0:00:00.638) 0:00:04.279 ******
2025-10-09 10:34:09.714800 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-10-09 10:34:09.714811 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-10-09 10:34:09.714822 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-10-09 10:34:09.714833 | orchestrator |
2025-10-09 10:34:09.714844 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-10-09 10:34:09.714855 | orchestrator | Thursday 09 October 2025 10:31:00 +0000 (0:00:00.367) 0:00:04.647 ******
2025-10-09 10:34:09.714866 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:34:09.714879 | orchestrator |
2025-10-09 10:34:09.714890 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-10-09 10:34:09.714930 | orchestrator | Thursday 09 October 2025 10:31:00 +0000 (0:00:00.602) 0:00:05.250 ******
2025-10-09 10:34:09.714985 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:34:09.715003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:34:09.715022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:34:09.715042 | orchestrator | 2025-10-09 10:34:09.715060 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-10-09 10:34:09.715072 | orchestrator | Thursday 09 October 2025 10:31:03 +0000 (0:00:03.194) 0:00:08.444 ****** 2025-10-09 10:34:09.715083 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.715095 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.715106 | 
orchestrator | skipping: [testbed-node-2]
2025-10-09 10:34:09.715117 | orchestrator |
2025-10-09 10:34:09.715127 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-10-09 10:34:09.715138 | orchestrator | Thursday 09 October 2025 10:31:04 +0000 (0:00:00.750) 0:00:09.195 ******
2025-10-09 10:34:09.715149 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:34:09.715160 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:34:09.715171 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:34:09.715181 | orchestrator |
2025-10-09 10:34:09.715192 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-10-09 10:34:09.715224 | orchestrator | Thursday 09 October 2025 10:31:06 +0000 (0:00:01.581) 0:00:10.777 ******
2025-10-09 10:34:09.715237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:34:09.715269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:34:09.715283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-10-09 10:34:09.715301 | orchestrator |
2025-10-09 10:34:09.715312 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-10-09 10:34:09.715323 | orchestrator | Thursday 09 October 2025 10:31:10 +0000 (0:00:03.895) 0:00:14.672 ******
2025-10-09 10:34:09.715334 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:34:09.715345 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:34:09.715356 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:34:09.715367 | orchestrator |
2025-10-09 10:34:09.715378 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-10-09 10:34:09.715389 | orchestrator | Thursday 09 October 2025 10:31:11 +0000 (0:00:01.140) 0:00:15.813 ******
2025-10-09 10:34:09.715400 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:34:09.715411 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:34:09.715421 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:34:09.715432 | orchestrator |
2025-10-09 10:34:09.715443 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-10-09 10:34:09.715453 | orchestrator | Thursday 09 October 2025 10:31:15 +0000 (0:00:04.568) 0:00:20.382 ******
2025-10-09 10:34:09.715464 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:34:09.715475 | orchestrator |
2025-10-09 10:34:09.715486 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-10-09
10:34:09.715497 | orchestrator | Thursday 09 October 2025 10:31:16 +0000 (0:00:00.539) 0:00:20.921 ****** 2025-10-09 10:34:09.715522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:34:09.715535 | orchestrator | 
skipping: [testbed-node-0] 2025-10-09 10:34:09.715546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:34:09.715564 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.715588 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:34:09.715600 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.715612 | orchestrator | 2025-10-09 10:34:09.715622 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2025-10-09 10:34:09.715633 | orchestrator | Thursday 09 October 2025 10:31:20 +0000 (0:00:04.139) 0:00:25.060 ****** 2025-10-09 10:34:09.715645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-10-09 10:34:09.715663 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.715686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:34:09.715698 | orchestrator | skipping: 
[testbed-node-2] 2025-10-09 10:34:09.715710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:34:09.715727 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:09.715738 | orchestrator | 2025-10-09 
10:34:09.715749 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-10-09 10:34:09.715759 | orchestrator | Thursday 09 October 2025 10:31:23 +0000 (0:00:03.462) 0:00:28.523 ****** 2025-10-09 10:34:09.715775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:34:09.715788 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.715808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-10-09 10:34:09.715833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-10-09 10:34:09.715846 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.715856 | orchestrator | skipping: 
[testbed-node-0] 2025-10-09 10:34:09.715867 | orchestrator | 2025-10-09 10:34:09.715878 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-10-09 10:34:09.715889 | orchestrator | Thursday 09 October 2025 10:31:26 +0000 (0:00:02.961) 0:00:31.485 ****** 2025-10-09 10:34:09.715909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:34:09.715975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 10:34:09.716002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-10-09 
10:34:09.716021 | orchestrator | 2025-10-09 10:34:09.716032 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-10-09 10:34:09.716043 | orchestrator | Thursday 09 October 2025 10:31:30 +0000 (0:00:03.624) 0:00:35.110 ****** 2025-10-09 10:34:09.716054 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.716065 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:34:09.716076 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:34:09.716086 | orchestrator | 2025-10-09 10:34:09.716097 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-10-09 10:34:09.716108 | orchestrator | Thursday 09 October 2025 10:31:31 +0000 (0:00:00.979) 0:00:36.089 ****** 2025-10-09 10:34:09.716119 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.716130 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:09.716141 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:09.716152 | orchestrator | 2025-10-09 10:34:09.716163 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-10-09 10:34:09.716173 | orchestrator | Thursday 09 October 2025 10:31:32 +0000 (0:00:00.860) 0:00:36.949 ****** 2025-10-09 10:34:09.716184 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.716195 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:09.716227 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:09.716238 | orchestrator | 2025-10-09 10:34:09.716249 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-10-09 10:34:09.716260 | orchestrator | Thursday 09 October 2025 10:31:32 +0000 (0:00:00.343) 0:00:37.293 ****** 2025-10-09 10:34:09.716272 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-10-09 10:34:09.716283 | orchestrator | ...ignoring 2025-10-09 10:34:09.716294 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-10-09 10:34:09.716305 | orchestrator | ...ignoring 2025-10-09 10:34:09.716316 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-10-09 10:34:09.716327 | orchestrator | ...ignoring 2025-10-09 10:34:09.716338 | orchestrator | 2025-10-09 10:34:09.716349 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-10-09 10:34:09.716360 | orchestrator | Thursday 09 October 2025 10:31:43 +0000 (0:00:11.004) 0:00:48.297 ****** 2025-10-09 10:34:09.716371 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.716382 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:09.716393 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:09.716404 | orchestrator | 2025-10-09 10:34:09.716415 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-10-09 10:34:09.716426 | orchestrator | Thursday 09 October 2025 10:31:44 +0000 (0:00:00.495) 0:00:48.793 ****** 2025-10-09 10:34:09.716437 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:09.716448 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.716458 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.716469 | orchestrator | 2025-10-09 10:34:09.716480 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-10-09 10:34:09.716491 | orchestrator | Thursday 09 October 2025 10:31:44 +0000 (0:00:00.686) 0:00:49.480 ****** 2025-10-09 10:34:09.716508 | orchestrator | skipping: 
[testbed-node-0] 2025-10-09 10:34:09.716519 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.716530 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.716541 | orchestrator | 2025-10-09 10:34:09.716552 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-10-09 10:34:09.716567 | orchestrator | Thursday 09 October 2025 10:31:45 +0000 (0:00:00.488) 0:00:49.968 ****** 2025-10-09 10:34:09.716578 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:09.716589 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.716600 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.716611 | orchestrator | 2025-10-09 10:34:09.716622 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-10-09 10:34:09.716633 | orchestrator | Thursday 09 October 2025 10:31:45 +0000 (0:00:00.494) 0:00:50.463 ****** 2025-10-09 10:34:09.716644 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.716655 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:09.716666 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:09.716677 | orchestrator | 2025-10-09 10:34:09.716688 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-10-09 10:34:09.716705 | orchestrator | Thursday 09 October 2025 10:31:46 +0000 (0:00:00.438) 0:00:50.901 ****** 2025-10-09 10:34:09.716717 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:09.716728 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.716739 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.716750 | orchestrator | 2025-10-09 10:34:09.716761 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-09 10:34:09.716772 | orchestrator | Thursday 09 October 2025 10:31:47 +0000 (0:00:00.734) 0:00:51.636 ****** 2025-10-09 10:34:09.716783 | orchestrator | skipping: 
[testbed-node-1] 2025-10-09 10:34:09.716794 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.716805 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-10-09 10:34:09.716816 | orchestrator | 2025-10-09 10:34:09.716827 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-10-09 10:34:09.716838 | orchestrator | Thursday 09 October 2025 10:31:47 +0000 (0:00:00.401) 0:00:52.038 ****** 2025-10-09 10:34:09.716849 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.716859 | orchestrator | 2025-10-09 10:34:09.716870 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-10-09 10:34:09.716881 | orchestrator | Thursday 09 October 2025 10:31:58 +0000 (0:00:10.738) 0:01:02.777 ****** 2025-10-09 10:34:09.716892 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.716903 | orchestrator | 2025-10-09 10:34:09.716914 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-09 10:34:09.716925 | orchestrator | Thursday 09 October 2025 10:31:58 +0000 (0:00:00.143) 0:01:02.920 ****** 2025-10-09 10:34:09.716936 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:09.716947 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.716958 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.716969 | orchestrator | 2025-10-09 10:34:09.716980 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-10-09 10:34:09.716991 | orchestrator | Thursday 09 October 2025 10:31:59 +0000 (0:00:01.035) 0:01:03.956 ****** 2025-10-09 10:34:09.717002 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.717012 | orchestrator | 2025-10-09 10:34:09.717023 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-10-09 10:34:09.717034 | orchestrator | Thursday 09 
October 2025 10:32:07 +0000 (0:00:08.250) 0:01:12.206 ****** 2025-10-09 10:34:09.717045 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.717056 | orchestrator | 2025-10-09 10:34:09.717067 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-10-09 10:34:09.717078 | orchestrator | Thursday 09 October 2025 10:32:09 +0000 (0:00:01.648) 0:01:13.855 ****** 2025-10-09 10:34:09.717089 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.717106 | orchestrator | 2025-10-09 10:34:09.717117 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-10-09 10:34:09.717128 | orchestrator | Thursday 09 October 2025 10:32:11 +0000 (0:00:02.618) 0:01:16.473 ****** 2025-10-09 10:34:09.717139 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.717150 | orchestrator | 2025-10-09 10:34:09.717161 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-10-09 10:34:09.717172 | orchestrator | Thursday 09 October 2025 10:32:12 +0000 (0:00:00.149) 0:01:16.623 ****** 2025-10-09 10:34:09.717183 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:09.717194 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.717221 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.717232 | orchestrator | 2025-10-09 10:34:09.717243 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-10-09 10:34:09.717254 | orchestrator | Thursday 09 October 2025 10:32:12 +0000 (0:00:00.324) 0:01:16.948 ****** 2025-10-09 10:34:09.717265 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:09.717276 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-10-09 10:34:09.717287 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:34:09.717298 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:34:09.717309 | orchestrator | 
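[Editor's note] The port-liveness checks in the log above use Ansible's `wait_for` with a search string, which succeeds only once the TCP greeting on 3306 contains "MariaDB" (the earlier "Timeout when waiting for search string MariaDB in 192.168.16.10:3306" failures are that probe running before the cluster is bootstrapped). A minimal sketch of the same idea, with names of our own choosing; the banner bytes in the demo are illustrative, not captured from this run:

```python
import re
import socket

def banner_is_mariadb(greeting: bytes) -> bool:
    # The MySQL protocol greeting embeds the server version string,
    # e.g. b"...10.11.13-MariaDB..." on a MariaDB server.
    return re.search(rb"MariaDB", greeting) is not None

def probe(host: str, port: int, timeout: float = 10.0) -> bool:
    """Connect and check the first packet for the MariaDB banner."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return banner_is_mariadb(s.recv(1024))
    except OSError:
        return False

# Offline demo with a hypothetical greeting fragment:
assert banner_is_mariadb(b"\x0a10.11.13-MariaDB-log\x00")
```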
2025-10-09 10:34:09.717320 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-10-09 10:34:09.717331 | orchestrator | skipping: no hosts matched 2025-10-09 10:34:09.717342 | orchestrator | 2025-10-09 10:34:09.717352 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-10-09 10:34:09.717364 | orchestrator | 2025-10-09 10:34:09.717374 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-10-09 10:34:09.717385 | orchestrator | Thursday 09 October 2025 10:32:12 +0000 (0:00:00.586) 0:01:17.534 ****** 2025-10-09 10:34:09.717396 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:34:09.717407 | orchestrator | 2025-10-09 10:34:09.717418 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-10-09 10:34:09.717429 | orchestrator | Thursday 09 October 2025 10:32:30 +0000 (0:00:17.699) 0:01:35.234 ****** 2025-10-09 10:34:09.717440 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:09.717451 | orchestrator | 2025-10-09 10:34:09.717462 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-10-09 10:34:09.717473 | orchestrator | Thursday 09 October 2025 10:32:51 +0000 (0:00:20.627) 0:01:55.861 ****** 2025-10-09 10:34:09.717483 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:09.717494 | orchestrator | 2025-10-09 10:34:09.717505 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-10-09 10:34:09.717516 | orchestrator | 2025-10-09 10:34:09.717527 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-10-09 10:34:09.717542 | orchestrator | Thursday 09 October 2025 10:32:53 +0000 (0:00:02.546) 0:01:58.408 ****** 2025-10-09 10:34:09.717554 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:34:09.717565 | orchestrator | 
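[Editor's note] The recurring "Wait for MariaDB service to sync WSREP" tasks above poll Galera status until each restarted node reports itself synced. A hedged sketch of the decision those tasks make, over the kind of rows a `SHOW GLOBAL STATUS LIKE 'wsrep_%'` query returns (the dict values below are illustrative, not taken from this run):

```python
def wsrep_synced(status: dict) -> bool:
    # In Galera, wsrep_local_state 4 carries the label "Synced";
    # a node is usable once it is ready and synced with the cluster.
    return (
        status.get("wsrep_ready") == "ON"
        and status.get("wsrep_local_state_comment") == "Synced"
    )

assert wsrep_synced({"wsrep_ready": "ON", "wsrep_local_state_comment": "Synced"})
assert not wsrep_synced({"wsrep_ready": "ON", "wsrep_local_state_comment": "Donor/Desynced"})
```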
2025-10-09 10:34:09.717576 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-10-09 10:34:09.717587 | orchestrator | Thursday 09 October 2025 10:33:12 +0000 (0:00:19.078) 0:02:17.486 ****** 2025-10-09 10:34:09.717597 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:09.717608 | orchestrator | 2025-10-09 10:34:09.717619 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-10-09 10:34:09.717630 | orchestrator | Thursday 09 October 2025 10:33:33 +0000 (0:00:20.658) 0:02:38.145 ****** 2025-10-09 10:34:09.717641 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:09.717652 | orchestrator | 2025-10-09 10:34:09.717669 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-10-09 10:34:09.717681 | orchestrator | 2025-10-09 10:34:09.717692 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-10-09 10:34:09.717703 | orchestrator | Thursday 09 October 2025 10:33:36 +0000 (0:00:02.683) 0:02:40.828 ****** 2025-10-09 10:34:09.717720 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.717731 | orchestrator | 2025-10-09 10:34:09.717742 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-10-09 10:34:09.717753 | orchestrator | Thursday 09 October 2025 10:33:48 +0000 (0:00:12.574) 0:02:53.402 ****** 2025-10-09 10:34:09.717764 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.717775 | orchestrator | 2025-10-09 10:34:09.717786 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-10-09 10:34:09.717797 | orchestrator | Thursday 09 October 2025 10:33:53 +0000 (0:00:04.607) 0:02:58.010 ****** 2025-10-09 10:34:09.717808 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.717819 | orchestrator | 2025-10-09 10:34:09.717830 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-10-09 10:34:09.717841 | orchestrator | 2025-10-09 10:34:09.717852 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-10-09 10:34:09.717863 | orchestrator | Thursday 09 October 2025 10:33:56 +0000 (0:00:02.813) 0:03:00.823 ****** 2025-10-09 10:34:09.717874 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:34:09.717885 | orchestrator | 2025-10-09 10:34:09.717896 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-10-09 10:34:09.717907 | orchestrator | Thursday 09 October 2025 10:33:56 +0000 (0:00:00.563) 0:03:01.387 ****** 2025-10-09 10:34:09.717918 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.717930 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.717941 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.717952 | orchestrator | 2025-10-09 10:34:09.717963 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-10-09 10:34:09.717974 | orchestrator | Thursday 09 October 2025 10:33:59 +0000 (0:00:02.293) 0:03:03.680 ****** 2025-10-09 10:34:09.717985 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.717996 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.718007 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.718174 | orchestrator | 2025-10-09 10:34:09.718187 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-10-09 10:34:09.718258 | orchestrator | Thursday 09 October 2025 10:34:01 +0000 (0:00:02.291) 0:03:05.972 ****** 2025-10-09 10:34:09.718272 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.718284 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.718295 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.718305 | orchestrator | 
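[Editor's note] The `custom_member_list` entries repeated in the item dumps above are pre-rendered HAProxy `server` lines (node-0 primary, node-1/node-2 as `backup`). A small parser, written for this note, makes the fields explicit; the function name and output shape are our own, not part of kolla-ansible:

```python
import re

def parse_member(line: str) -> dict:
    """Split ' server <name> <addr>:<port> check port <p> inter <ms> rise <r> fall <f> [backup]'."""
    m = re.match(
        r"\s*server\s+(?P<name>\S+)\s+(?P<addr>\S+):(?P<port>\d+)\s+check port (?P<check_port>\d+)"
        r"\s+inter (?P<inter>\d+)\s+rise (?P<rise>\d+)\s+fall (?P<fall>\d+)(?P<backup>\s+backup)?",
        line,
    )
    if not m:
        raise ValueError(f"unparseable member line: {line!r}")
    d = m.groupdict()
    return {
        "name": d["name"], "addr": d["addr"], "port": int(d["port"]),
        "check_port": int(d["check_port"]), "inter_ms": int(d["inter"]),
        "rise": int(d["rise"]), "fall": int(d["fall"]),
        "backup": d["backup"] is not None,
    }

# One of the lines from the log above:
member = parse_member(" server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup")
```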
2025-10-09 10:34:09.718316 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-10-09 10:34:09.718327 | orchestrator | Thursday 09 October 2025 10:34:03 +0000 (0:00:02.200) 0:03:08.173 ****** 2025-10-09 10:34:09.718338 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.718349 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.718360 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:34:09.718371 | orchestrator | 2025-10-09 10:34:09.718381 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-10-09 10:34:09.718392 | orchestrator | Thursday 09 October 2025 10:34:05 +0000 (0:00:02.186) 0:03:10.359 ****** 2025-10-09 10:34:09.718403 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:34:09.718414 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:34:09.718425 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:34:09.718435 | orchestrator | 2025-10-09 10:34:09.718446 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-10-09 10:34:09.718457 | orchestrator | Thursday 09 October 2025 10:34:09 +0000 (0:00:03.275) 0:03:13.634 ****** 2025-10-09 10:34:09.718468 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:34:09.718479 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:34:09.718490 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:34:09.718501 | orchestrator | 2025-10-09 10:34:09.718511 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:34:09.718522 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-10-09 10:34:09.718542 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-10-09 10:34:09.718555 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
 2025-10-09 10:34:09.718566 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-10-09 10:34:09.718577 | orchestrator | 2025-10-09 10:34:09.718587 | orchestrator | 2025-10-09 10:34:09.718596 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:34:09.718606 | orchestrator | Thursday 09 October 2025 10:34:09 +0000 (0:00:00.245) 0:03:13.879 ****** 2025-10-09 10:34:09.718621 | orchestrator | =============================================================================== 2025-10-09 10:34:09.718631 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.29s 2025-10-09 10:34:09.718641 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.78s 2025-10-09 10:34:09.718650 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.57s 2025-10-09 10:34:09.718660 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.00s 2025-10-09 10:34:09.718669 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.74s 2025-10-09 10:34:09.718686 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.25s 2025-10-09 10:34:09.718697 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.23s 2025-10-09 10:34:09.718706 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.61s 2025-10-09 10:34:09.718716 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.57s 2025-10-09 10:34:09.718726 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.14s 2025-10-09 10:34:09.718735 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.90s 2025-10-09 10:34:09.718745 | orchestrator | 
mariadb : Check mariadb containers -------------------------------------- 3.62s 2025-10-09 10:34:09.718754 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.46s 2025-10-09 10:34:09.718764 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.28s 2025-10-09 10:34:09.718773 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.19s 2025-10-09 10:34:09.718783 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.96s 2025-10-09 10:34:09.718793 | orchestrator | Check MariaDB service --------------------------------------------------- 2.88s 2025-10-09 10:34:09.718802 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.81s 2025-10-09 10:34:09.718812 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.62s 2025-10-09 10:34:09.718821 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.29s 2025-10-09 10:34:12.773128 | orchestrator | 2025-10-09 10:34:12 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:34:12.776246 | orchestrator | 2025-10-09 10:34:12 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED 2025-10-09 10:34:12.777748 | orchestrator | 2025-10-09 10:34:12 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:34:12.777773 | orchestrator | 2025-10-09 10:34:12 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:34:15.833794 | orchestrator | 2025-10-09 10:34:15 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:34:15.833988 | orchestrator | 2025-10-09 10:34:15 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED 2025-10-09 10:34:15.835126 | orchestrator | 2025-10-09 10:34:15 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state 
STARTED 2025-10-09 10:34:15.835153 | orchestrator | 2025-10-09 10:34:15 | INFO  | Wait 1 second(s) until the next check [... identical polling output repeated every ~3 s from 10:34:18 to 10:35:01: tasks e66238e1-61c4-4d44-8f9b-eb27c3e4b41b, d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a and 5ebdd63c-c29b-441a-a2fb-46df10535d5c all in state STARTED ...] 2025-10-09 10:35:04.636039 | orchestrator | 
2025-10-09 10:35:04 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:04.639532 | orchestrator | 2025-10-09 10:35:04 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state STARTED 2025-10-09 10:35:04.641275 | orchestrator | 2025-10-09 10:35:04 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:04.641300 | orchestrator | 2025-10-09 10:35:04 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:07.680413 | orchestrator | 2025-10-09 10:35:07 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:07.683513 | orchestrator | 2025-10-09 10:35:07 | INFO  | Task d3ecf2cb-0e3b-484a-bb04-35e28e61ec9a is in state SUCCESS 2025-10-09 10:35:07.685806 | orchestrator | 2025-10-09 10:35:07.685837 | orchestrator | 2025-10-09 10:35:07.685848 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-10-09 10:35:07.685858 | orchestrator | 2025-10-09 10:35:07.685869 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-10-09 10:35:07.685879 | orchestrator | Thursday 09 October 2025 10:32:56 +0000 (0:00:00.642) 0:00:00.642 ****** 2025-10-09 10:35:07.685889 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:35:07.685899 | orchestrator | 2025-10-09 10:35:07.685909 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-10-09 10:35:07.685919 | orchestrator | Thursday 09 October 2025 10:32:57 +0000 (0:00:00.705) 0:00:01.347 ****** 2025-10-09 10:35:07.685930 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.685941 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.685951 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.685961 | orchestrator | 2025-10-09 10:35:07.685971 | orchestrator | TASK [ceph-facts : Set_fact 
is_atomic] ***************************************** 2025-10-09 10:35:07.685981 | orchestrator | Thursday 09 October 2025 10:32:58 +0000 (0:00:00.625) 0:00:01.972 ****** 2025-10-09 10:35:07.685991 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.686001 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.686010 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.686140 | orchestrator | 2025-10-09 10:35:07.686151 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-10-09 10:35:07.686161 | orchestrator | Thursday 09 October 2025 10:32:58 +0000 (0:00:00.297) 0:00:02.270 ****** 2025-10-09 10:35:07.686171 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.686181 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.686190 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.686225 | orchestrator | 2025-10-09 10:35:07.686250 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-10-09 10:35:07.686260 | orchestrator | Thursday 09 October 2025 10:32:59 +0000 (0:00:00.913) 0:00:03.183 ****** 2025-10-09 10:35:07.686270 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.686279 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.686289 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.686298 | orchestrator | 2025-10-09 10:35:07.686308 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-10-09 10:35:07.686318 | orchestrator | Thursday 09 October 2025 10:32:59 +0000 (0:00:00.328) 0:00:03.512 ****** 2025-10-09 10:35:07.686349 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.686397 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.686410 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.686422 | orchestrator | 2025-10-09 10:35:07.686433 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-10-09 
10:35:07.686444 | orchestrator | Thursday 09 October 2025 10:32:59 +0000 (0:00:00.305) 0:00:03.817 ****** 2025-10-09 10:35:07.686456 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.686536 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.686548 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.686559 | orchestrator | 2025-10-09 10:35:07.686571 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-10-09 10:35:07.686582 | orchestrator | Thursday 09 October 2025 10:33:00 +0000 (0:00:00.343) 0:00:04.161 ****** 2025-10-09 10:35:07.686593 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.686605 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.686627 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.686638 | orchestrator | 2025-10-09 10:35:07.686649 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-10-09 10:35:07.686661 | orchestrator | Thursday 09 October 2025 10:33:00 +0000 (0:00:00.543) 0:00:04.705 ****** 2025-10-09 10:35:07.686672 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.686683 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.686694 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.686705 | orchestrator | 2025-10-09 10:35:07.686716 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-10-09 10:35:07.686727 | orchestrator | Thursday 09 October 2025 10:33:01 +0000 (0:00:00.315) 0:00:05.020 ****** 2025-10-09 10:35:07.686739 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:35:07.686750 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:35:07.686760 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:35:07.686770 | orchestrator | 2025-10-09 
10:35:07.686779 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-10-09 10:35:07.686789 | orchestrator | Thursday 09 October 2025 10:33:01 +0000 (0:00:00.722) 0:00:05.743 ****** 2025-10-09 10:35:07.686799 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.686808 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.686818 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.686828 | orchestrator | 2025-10-09 10:35:07.686837 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-10-09 10:35:07.686847 | orchestrator | Thursday 09 October 2025 10:33:02 +0000 (0:00:00.525) 0:00:06.268 ****** 2025-10-09 10:35:07.686857 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:35:07.686866 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:35:07.686876 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:35:07.686886 | orchestrator | 2025-10-09 10:35:07.686895 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-10-09 10:35:07.686905 | orchestrator | Thursday 09 October 2025 10:33:04 +0000 (0:00:02.276) 0:00:08.545 ****** 2025-10-09 10:35:07.686915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-10-09 10:35:07.686925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-10-09 10:35:07.686935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-10-09 10:35:07.686945 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.686954 | orchestrator | 2025-10-09 10:35:07.686964 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-10-09 10:35:07.686986 | orchestrator | Thursday 09 October 2025 10:33:05 +0000 (0:00:00.688) 
0:00:09.234 ****** 2025-10-09 10:35:07.686998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.687019 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.687029 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.687040 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687050 | orchestrator | 2025-10-09 10:35:07.687060 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-10-09 10:35:07.687069 | orchestrator | Thursday 09 October 2025 10:33:06 +0000 (0:00:01.004) 0:00:10.238 ****** 2025-10-09 10:35:07.687081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.687092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.687107 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.687118 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687128 | orchestrator | 2025-10-09 10:35:07.687138 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-10-09 10:35:07.687147 | orchestrator | Thursday 09 October 2025 10:33:06 +0000 (0:00:00.377) 0:00:10.615 ****** 2025-10-09 10:35:07.687159 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ee731b6bd0ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-10-09 10:33:03.065831', 'end': '2025-10-09 10:33:03.115616', 'delta': '0:00:00.049785', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ee731b6bd0ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-10-09 10:35:07.687172 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2a822fba2af6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-10-09 10:33:03.877254', 'end': '2025-10-09 
10:33:03.936680', 'delta': '0:00:00.059426', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a822fba2af6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-10-09 10:35:07.687211 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ee28ecce7629', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-10-09 10:33:04.493705', 'end': '2025-10-09 10:33:04.537235', 'delta': '0:00:00.043530', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ee28ecce7629'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-10-09 10:35:07.687223 | orchestrator | 2025-10-09 10:35:07.687233 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-10-09 10:35:07.687243 | orchestrator | Thursday 09 October 2025 10:33:06 +0000 (0:00:00.213) 0:00:10.829 ****** 2025-10-09 10:35:07.687253 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.687263 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.687272 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.687282 | orchestrator | 2025-10-09 10:35:07.687292 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-10-09 10:35:07.687302 | orchestrator 
| Thursday 09 October 2025 10:33:07 +0000 (0:00:00.461) 0:00:11.291 ****** 2025-10-09 10:35:07.687312 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-10-09 10:35:07.687321 | orchestrator | 2025-10-09 10:35:07.687331 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-10-09 10:35:07.687341 | orchestrator | Thursday 09 October 2025 10:33:09 +0000 (0:00:01.684) 0:00:12.975 ****** 2025-10-09 10:35:07.687351 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687361 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687370 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.687380 | orchestrator | 2025-10-09 10:35:07.687390 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-10-09 10:35:07.687399 | orchestrator | Thursday 09 October 2025 10:33:09 +0000 (0:00:00.321) 0:00:13.297 ****** 2025-10-09 10:35:07.687409 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687419 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687429 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.687438 | orchestrator | 2025-10-09 10:35:07.687448 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-09 10:35:07.687458 | orchestrator | Thursday 09 October 2025 10:33:09 +0000 (0:00:00.416) 0:00:13.713 ****** 2025-10-09 10:35:07.687467 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687477 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687487 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.687497 | orchestrator | 2025-10-09 10:35:07.687507 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-10-09 10:35:07.687516 | orchestrator | Thursday 09 October 2025 10:33:10 +0000 (0:00:00.537) 0:00:14.251 ****** 2025-10-09 10:35:07.687526 | orchestrator | 
ok: [testbed-node-3] 2025-10-09 10:35:07.687536 | orchestrator | 2025-10-09 10:35:07.687550 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-10-09 10:35:07.687560 | orchestrator | Thursday 09 October 2025 10:33:10 +0000 (0:00:00.186) 0:00:14.437 ****** 2025-10-09 10:35:07.687570 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687580 | orchestrator | 2025-10-09 10:35:07.687590 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-10-09 10:35:07.687599 | orchestrator | Thursday 09 October 2025 10:33:10 +0000 (0:00:00.338) 0:00:14.775 ****** 2025-10-09 10:35:07.687609 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687619 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687634 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.687643 | orchestrator | 2025-10-09 10:35:07.687653 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-10-09 10:35:07.687663 | orchestrator | Thursday 09 October 2025 10:33:11 +0000 (0:00:00.301) 0:00:15.077 ****** 2025-10-09 10:35:07.687673 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687683 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687693 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.687702 | orchestrator | 2025-10-09 10:35:07.687712 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-10-09 10:35:07.687722 | orchestrator | Thursday 09 October 2025 10:33:11 +0000 (0:00:00.332) 0:00:15.410 ****** 2025-10-09 10:35:07.687732 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687742 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687751 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.687761 | orchestrator | 2025-10-09 10:35:07.687771 | orchestrator | TASK [ceph-facts : Resolve 
dedicated_device link(s)] *************************** 2025-10-09 10:35:07.687781 | orchestrator | Thursday 09 October 2025 10:33:12 +0000 (0:00:00.521) 0:00:15.931 ****** 2025-10-09 10:35:07.687790 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687800 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687810 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.687819 | orchestrator | 2025-10-09 10:35:07.687829 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-10-09 10:35:07.687839 | orchestrator | Thursday 09 October 2025 10:33:12 +0000 (0:00:00.352) 0:00:16.284 ****** 2025-10-09 10:35:07.687849 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687858 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687868 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.687878 | orchestrator | 2025-10-09 10:35:07.687888 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-10-09 10:35:07.687898 | orchestrator | Thursday 09 October 2025 10:33:12 +0000 (0:00:00.376) 0:00:16.661 ****** 2025-10-09 10:35:07.687908 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687917 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687927 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.687937 | orchestrator | 2025-10-09 10:35:07.687947 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-10-09 10:35:07.687962 | orchestrator | Thursday 09 October 2025 10:33:13 +0000 (0:00:00.308) 0:00:16.970 ****** 2025-10-09 10:35:07.687972 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.687981 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.687991 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.688001 | orchestrator | 2025-10-09 10:35:07.688010 | orchestrator | TASK [ceph-facts : Collect 
existed devices] ************************************ 2025-10-09 10:35:07.688020 | orchestrator | Thursday 09 October 2025 10:33:13 +0000 (0:00:00.544) 0:00:17.514 ****** 2025-10-09 10:35:07.688043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cbdaba5--e3a8--55ff--9207--33249002ea74-osd--block--0cbdaba5--e3a8--55ff--9207--33249002ea74', 'dm-uuid-LVM-ExvMc93TaGMjWOqGPvd34m2gk1t4oCUOJ6FpMp03P2VtxgLz5RAEn3Dnels013gF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b8397ec--b473--5fab--a988--270c3fd4ebb0-osd--block--0b8397ec--b473--5fab--a988--270c3fd4ebb0', 'dm-uuid-LVM-Iu2PpFdOBa8teqvWKcbfD2Pd2CSRQtEGYNoKuoXUgYBbYH60uaRabo8PgEzus6ML'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688085 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688237 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0cbdaba5--e3a8--55ff--9207--33249002ea74-osd--block--0cbdaba5--e3a8--55ff--9207--33249002ea74'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uLdo5O-ec3P-ApYI-bZen-ZZ3F-BeNc-Ki292o', 'scsi-0QEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d', 'scsi-SQEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee-osd--block--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee', 'dm-uuid-LVM-au3ljzSANb0tyOeMGUgRFh2fQv14LQXqLqkTr72pBhnrUNTZZIjkiDu5w36Kbbq4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0b8397ec--b473--5fab--a988--270c3fd4ebb0-osd--block--0b8397ec--b473--5fab--a988--270c3fd4ebb0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lRzkGE-YzPM-XiG6-y68a-feZr-FiG0-MdFMqH', 'scsi-0QEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84', 'scsi-SQEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e', 'scsi-SQEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--db411f8a--05b0--54f7--b748--fd517a3c676f-osd--block--db411f8a--05b0--54f7--b748--fd517a3c676f', 'dm-uuid-LVM-jpDQBe8QHm1K0O9IsCStmbdH56NsHtzn6CQJMN93ZeGkA72L0LSc1QtKsLj0mgLC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part15', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee-osd--block--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K4lFF5-75vb-ZsLJ-bPXw-JnwN-3ljd-cE9Yz9', 'scsi-0QEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d', 'scsi-SQEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--db411f8a--05b0--54f7--b748--fd517a3c676f-osd--block--db411f8a--05b0--54f7--b748--fd517a3c676f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FC0Iqg-a508-XVdl-dcs1-Lm9g-TMVF-jFQuVZ', 'scsi-0QEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168', 'scsi-SQEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3', 'scsi-SQEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.688493 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.688503 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.688513 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83d577c9--ff1a--5f1d--bd0e--44f99d742f78-osd--block--83d577c9--ff1a--5f1d--bd0e--44f99d742f78', 'dm-uuid-LVM-gxlCTZ5efJTHi74imUaaLMcOdZC9sz722geQT9GSu5DHYJXvaxEnu8fZKsMeh9uX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ce20a60--fba3--5536--8b48--1e48c039a9b4-osd--block--8ce20a60--fba3--5536--8b48--1e48c039a9b4', 'dm-uuid-LVM-2wowwvZuu9v58jhoRFdOjaRASbwQw8Dt4MH44vTkY6o4LKxALHPQgNRz4cfwq14j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-10-09 10:35:07.688969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-10-09 10:35:07.688998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part1', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part14', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part15', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part16', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.689019 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--83d577c9--ff1a--5f1d--bd0e--44f99d742f78-osd--block--83d577c9--ff1a--5f1d--bd0e--44f99d742f78'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FroQeu-Fzqs-f6jq-MAe8-Csas-tQEp-PNWjna', 'scsi-0QEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397', 'scsi-SQEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.689030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8ce20a60--fba3--5536--8b48--1e48c039a9b4-osd--block--8ce20a60--fba3--5536--8b48--1e48c039a9b4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EkQEBW-S5T8-FI78-gRGq-CMjx-7hxO-72EIVH', 'scsi-0QEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425', 'scsi-SQEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.689041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f', 'scsi-SQEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.689055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-10-09 10:35:07.689072 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.689082 | orchestrator | 2025-10-09 10:35:07.689092 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-10-09 10:35:07.689102 | orchestrator | Thursday 09 October 2025 10:33:14 +0000 (0:00:00.658) 0:00:18.173 ****** 2025-10-09 10:35:07.689113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cbdaba5--e3a8--55ff--9207--33249002ea74-osd--block--0cbdaba5--e3a8--55ff--9207--33249002ea74', 'dm-uuid-LVM-ExvMc93TaGMjWOqGPvd34m2gk1t4oCUOJ6FpMp03P2VtxgLz5RAEn3Dnels013gF'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b8397ec--b473--5fab--a988--270c3fd4ebb0-osd--block--0b8397ec--b473--5fab--a988--270c3fd4ebb0', 'dm-uuid-LVM-Iu2PpFdOBa8teqvWKcbfD2Pd2CSRQtEGYNoKuoXUgYBbYH60uaRabo8PgEzus6ML'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689138 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689148 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689158 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689174 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee-osd--block--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee', 'dm-uuid-LVM-au3ljzSANb0tyOeMGUgRFh2fQv14LQXqLqkTr72pBhnrUNTZZIjkiDu5w36Kbbq4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689270 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--db411f8a--05b0--54f7--b748--fd517a3c676f-osd--block--db411f8a--05b0--54f7--b748--fd517a3c676f', 'dm-uuid-LVM-jpDQBe8QHm1K0O9IsCStmbdH56NsHtzn6CQJMN93ZeGkA72L0LSc1QtKsLj0mgLC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689302 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689313 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689324 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part1', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part14', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part15', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part16', 'scsi-SQEMU_QEMU_HARDDISK_bb365fb9-195f-4b58-855c-59ae3371b843-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-10-09 10:35:07.689363 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0cbdaba5--e3a8--55ff--9207--33249002ea74-osd--block--0cbdaba5--e3a8--55ff--9207--33249002ea74'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uLdo5O-ec3P-ApYI-bZen-ZZ3F-BeNc-Ki292o', 'scsi-0QEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d', 'scsi-SQEMU_QEMU_HARDDISK_919b2ed4-de3e-4423-bde9-ac7f73558c8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689374 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--0b8397ec--b473--5fab--a988--270c3fd4ebb0-osd--block--0b8397ec--b473--5fab--a988--270c3fd4ebb0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lRzkGE-YzPM-XiG6-y68a-feZr-FiG0-MdFMqH', 'scsi-0QEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84', 'scsi-SQEMU_QEMU_HARDDISK_ea7d1eca-dc5e-463e-aff8-492469dc7c84'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689409 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-10-09 10:35:07.689434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e', 'scsi-SQEMU_QEMU_HARDDISK_2df43997-ce38-41a3-953f-7189c0799c6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689445 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-33-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689466 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689480 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689500 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part15', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_4cf24940-5021-4daf-9cb2-e8be662954e6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-10-09 10:35:07.689519 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.689532 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee-osd--block--bec6f5a4--3c2e--53c4--9bd6--39a84a6eb9ee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K4lFF5-75vb-ZsLJ-bPXw-JnwN-3ljd-cE9Yz9', 'scsi-0QEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d', 'scsi-SQEMU_QEMU_HARDDISK_9e7febf8-8ec8-4679-b2bb-f3ad59f2c20d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689548 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--db411f8a--05b0--54f7--b748--fd517a3c676f-osd--block--db411f8a--05b0--54f7--b748--fd517a3c676f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FC0Iqg-a508-XVdl-dcs1-Lm9g-TMVF-jFQuVZ', 'scsi-0QEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168', 'scsi-SQEMU_QEMU_HARDDISK_fd778c69-d4e8-41af-bc93-131a1dca1168'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3', 'scsi-SQEMU_QEMU_HARDDISK_96a31b72-79c3-475c-a7fa-14d6a4c6c9b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689582 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689594 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.689606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83d577c9--ff1a--5f1d--bd0e--44f99d742f78-osd--block--83d577c9--ff1a--5f1d--bd0e--44f99d742f78', 'dm-uuid-LVM-gxlCTZ5efJTHi74imUaaLMcOdZC9sz722geQT9GSu5DHYJXvaxEnu8fZKsMeh9uX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689618 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ce20a60--fba3--5536--8b48--1e48c039a9b4-osd--block--8ce20a60--fba3--5536--8b48--1e48c039a9b4', 'dm-uuid-LVM-2wowwvZuu9v58jhoRFdOjaRASbwQw8Dt4MH44vTkY6o4LKxALHPQgNRz4cfwq14j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689633 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689645 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689662 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689679 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689691 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689703 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689714 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689730 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689749 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part1', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part14', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part15', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part16', 'scsi-SQEMU_QEMU_HARDDISK_4e3856c6-3a0d-4403-a9bd-2ba24be42be0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-10-09 10:35:07.689767 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--83d577c9--ff1a--5f1d--bd0e--44f99d742f78-osd--block--83d577c9--ff1a--5f1d--bd0e--44f99d742f78'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FroQeu-Fzqs-f6jq-MAe8-Csas-tQEp-PNWjna', 'scsi-0QEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397', 'scsi-SQEMU_QEMU_HARDDISK_6ad7b454-0b43-4b47-a404-c2fa6c30a397'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8ce20a60--fba3--5536--8b48--1e48c039a9b4-osd--block--8ce20a60--fba3--5536--8b48--1e48c039a9b4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EkQEBW-S5T8-FI78-gRGq-CMjx-7hxO-72EIVH', 'scsi-0QEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425', 'scsi-SQEMU_QEMU_HARDDISK_46e0cf8b-6c4d-4615-bce2-a8b81f113425'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689799 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f', 'scsi-SQEMU_QEMU_HARDDISK_94b6a137-07a9-47a7-90bd-af13afc1319f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689816 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-10-09-09-37-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-10-09 10:35:07.689828 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.689840 | orchestrator | 2025-10-09 10:35:07.689852 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-10-09 10:35:07.689861 | orchestrator | Thursday 09 October 2025 10:33:14 +0000 (0:00:00.684) 0:00:18.857 ****** 2025-10-09 10:35:07.689871 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.689881 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.689891 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.689901 | orchestrator | 2025-10-09 10:35:07.689911 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-10-09 10:35:07.689920 | orchestrator | Thursday 09 October 2025 10:33:15 +0000 (0:00:00.724) 0:00:19.582 ****** 2025-10-09 10:35:07.689930 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.689940 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.689949 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.689959 | orchestrator | 2025-10-09 10:35:07.689969 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-10-09 10:35:07.689979 | orchestrator | Thursday 09 October 2025 10:33:16 +0000 (0:00:00.546) 0:00:20.129 ****** 2025-10-09 10:35:07.689988 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.689998 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.690007 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.690044 | orchestrator | 2025-10-09 10:35:07.690054 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-10-09 10:35:07.690064 | orchestrator | Thursday 09 October 2025 10:33:16 +0000 (0:00:00.644) 0:00:20.774 
****** 2025-10-09 10:35:07.690074 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690084 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.690093 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.690103 | orchestrator | 2025-10-09 10:35:07.690113 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-10-09 10:35:07.690122 | orchestrator | Thursday 09 October 2025 10:33:17 +0000 (0:00:00.301) 0:00:21.075 ****** 2025-10-09 10:35:07.690132 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690142 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.690151 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.690161 | orchestrator | 2025-10-09 10:35:07.690171 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-10-09 10:35:07.690186 | orchestrator | Thursday 09 October 2025 10:33:17 +0000 (0:00:00.429) 0:00:21.505 ****** 2025-10-09 10:35:07.690210 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690221 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.690230 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.690240 | orchestrator | 2025-10-09 10:35:07.690250 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-10-09 10:35:07.690260 | orchestrator | Thursday 09 October 2025 10:33:18 +0000 (0:00:00.536) 0:00:22.041 ****** 2025-10-09 10:35:07.690269 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-10-09 10:35:07.690283 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-10-09 10:35:07.690293 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-10-09 10:35:07.690303 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-10-09 10:35:07.690313 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-10-09 10:35:07.690322 | orchestrator | 
ok: [testbed-node-5] => (item=testbed-node-1) 2025-10-09 10:35:07.690332 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-10-09 10:35:07.690341 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-10-09 10:35:07.690351 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-10-09 10:35:07.690361 | orchestrator | 2025-10-09 10:35:07.690370 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-10-09 10:35:07.690380 | orchestrator | Thursday 09 October 2025 10:33:19 +0000 (0:00:00.903) 0:00:22.945 ****** 2025-10-09 10:35:07.690390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-10-09 10:35:07.690399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-10-09 10:35:07.690409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-10-09 10:35:07.690419 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690428 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-10-09 10:35:07.690438 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-10-09 10:35:07.690448 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-10-09 10:35:07.690457 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.690467 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-10-09 10:35:07.690476 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-10-09 10:35:07.690486 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-10-09 10:35:07.690496 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.690505 | orchestrator | 2025-10-09 10:35:07.690515 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-10-09 10:35:07.690525 | orchestrator | Thursday 09 October 2025 10:33:19 +0000 (0:00:00.380) 0:00:23.326 ****** 2025-10-09 
10:35:07.690534 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:35:07.690544 | orchestrator | 2025-10-09 10:35:07.690554 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-10-09 10:35:07.690565 | orchestrator | Thursday 09 October 2025 10:33:20 +0000 (0:00:00.761) 0:00:24.087 ****** 2025-10-09 10:35:07.690575 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690584 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.690594 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.690603 | orchestrator | 2025-10-09 10:35:07.690618 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-10-09 10:35:07.690629 | orchestrator | Thursday 09 October 2025 10:33:20 +0000 (0:00:00.361) 0:00:24.449 ****** 2025-10-09 10:35:07.690638 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690648 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.690657 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.690667 | orchestrator | 2025-10-09 10:35:07.690683 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-10-09 10:35:07.690693 | orchestrator | Thursday 09 October 2025 10:33:20 +0000 (0:00:00.340) 0:00:24.790 ****** 2025-10-09 10:35:07.690702 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690712 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.690722 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:35:07.690731 | orchestrator | 2025-10-09 10:35:07.690741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-10-09 10:35:07.690751 | orchestrator | Thursday 09 October 2025 10:33:21 +0000 (0:00:00.315) 0:00:25.105 ****** 2025-10-09 
10:35:07.690760 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.690770 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.690780 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.690789 | orchestrator | 2025-10-09 10:35:07.690799 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-10-09 10:35:07.690809 | orchestrator | Thursday 09 October 2025 10:33:21 +0000 (0:00:00.664) 0:00:25.770 ****** 2025-10-09 10:35:07.690818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:35:07.690828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:35:07.690838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:35:07.690847 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690857 | orchestrator | 2025-10-09 10:35:07.690867 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-10-09 10:35:07.690876 | orchestrator | Thursday 09 October 2025 10:33:22 +0000 (0:00:00.406) 0:00:26.177 ****** 2025-10-09 10:35:07.690886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:35:07.690896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:35:07.690905 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:35:07.690915 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690924 | orchestrator | 2025-10-09 10:35:07.690934 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-10-09 10:35:07.690944 | orchestrator | Thursday 09 October 2025 10:33:22 +0000 (0:00:00.394) 0:00:26.571 ****** 2025-10-09 10:35:07.690954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-10-09 10:35:07.690963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-10-09 10:35:07.690973 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-10-09 10:35:07.690983 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.690992 | orchestrator | 2025-10-09 10:35:07.691002 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-10-09 10:35:07.691012 | orchestrator | Thursday 09 October 2025 10:33:23 +0000 (0:00:00.394) 0:00:26.965 ****** 2025-10-09 10:35:07.691021 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:35:07.691035 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:35:07.691045 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:35:07.691054 | orchestrator | 2025-10-09 10:35:07.691064 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-10-09 10:35:07.691074 | orchestrator | Thursday 09 October 2025 10:33:23 +0000 (0:00:00.348) 0:00:27.314 ****** 2025-10-09 10:35:07.691084 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-10-09 10:35:07.691093 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-10-09 10:35:07.691103 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-10-09 10:35:07.691112 | orchestrator | 2025-10-09 10:35:07.691122 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-10-09 10:35:07.691132 | orchestrator | Thursday 09 October 2025 10:33:23 +0000 (0:00:00.528) 0:00:27.843 ****** 2025-10-09 10:35:07.691142 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:35:07.691151 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:35:07.691161 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:35:07.691179 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-10-09 10:35:07.691189 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-10-09 10:35:07.691212 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-10-09 10:35:07.691222 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-10-09 10:35:07.691232 | orchestrator | 2025-10-09 10:35:07.691241 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-10-09 10:35:07.691251 | orchestrator | Thursday 09 October 2025 10:33:25 +0000 (0:00:01.054) 0:00:28.897 ****** 2025-10-09 10:35:07.691261 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-10-09 10:35:07.691271 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-10-09 10:35:07.691280 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-10-09 10:35:07.691290 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-10-09 10:35:07.691300 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-10-09 10:35:07.691309 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-10-09 10:35:07.691319 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-10-09 10:35:07.691329 | orchestrator | 2025-10-09 10:35:07.691343 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-10-09 10:35:07.691353 | orchestrator | Thursday 09 October 2025 10:33:27 +0000 (0:00:02.087) 0:00:30.985 ****** 2025-10-09 10:35:07.691363 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:35:07.691372 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:35:07.691382 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-10-09 10:35:07.691391 | orchestrator | 2025-10-09 10:35:07.691401 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-10-09 10:35:07.691411 | orchestrator | Thursday 09 October 2025 10:33:27 +0000 (0:00:00.408) 0:00:31.393 ****** 2025-10-09 10:35:07.691421 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:35:07.691432 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:35:07.691442 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:35:07.691452 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:35:07.691462 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-10-09 10:35:07.691472 | orchestrator | 2025-10-09 10:35:07.691482 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-10-09 10:35:07.691492 | orchestrator | Thursday 09 October 2025 10:34:12 +0000 (0:00:44.726) 0:01:16.120 ****** 2025-10-09 10:35:07.691507 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691531 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691541 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691551 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691561 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691570 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-10-09 10:35:07.691580 | orchestrator | 2025-10-09 10:35:07.691589 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-10-09 10:35:07.691599 | orchestrator | Thursday 09 October 2025 10:34:36 +0000 (0:00:24.023) 0:01:40.144 ****** 2025-10-09 10:35:07.691609 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691619 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691628 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691638 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691647 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691657 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691667 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-10-09 10:35:07.691677 | orchestrator | 2025-10-09 10:35:07.691686 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-10-09 10:35:07.691696 | orchestrator | Thursday 09 October 2025 10:34:47 +0000 (0:00:11.562) 0:01:51.707 ****** 2025-10-09 10:35:07.691705 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691715 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:35:07.691725 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:35:07.691734 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691744 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:35:07.691754 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:35:07.691768 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691778 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:35:07.691788 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:35:07.691798 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691807 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-10-09 10:35:07.691817 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-10-09 10:35:07.691827 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-10-09 10:35:07.691836 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-10-09 10:35:07.691846 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-10-09 10:35:07.691855 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-10-09 10:35:07.691865 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-10-09 10:35:07.691875 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-10-09 10:35:07.691889 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-10-09 10:35:07.691899 | orchestrator |
2025-10-09 10:35:07.691909 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:35:07.691919 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-10-09 10:35:07.691930 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-10-09 10:35:07.691940 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-10-09 10:35:07.691950 | orchestrator |
2025-10-09 10:35:07.691959 | orchestrator |
2025-10-09 10:35:07.691969 | orchestrator |
2025-10-09 10:35:07.691978 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:35:07.691988 | orchestrator | Thursday 09 October 2025 10:35:06 +0000 (0:00:18.196) 0:02:09.903 ******
2025-10-09 10:35:07.691998 | orchestrator | ===============================================================================
2025-10-09 10:35:07.692008 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.73s
2025-10-09 10:35:07.692017 | orchestrator | generate keys ---------------------------------------------------------- 24.02s
2025-10-09 10:35:07.692027 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.20s
2025-10-09 10:35:07.692037 | orchestrator | get keys from monitors ------------------------------------------------- 11.56s
2025-10-09 10:35:07.692050 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.28s
2025-10-09 10:35:07.692060 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.09s
2025-10-09 10:35:07.692070 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.68s
2025-10-09 10:35:07.692080 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.05s
2025-10-09 10:35:07.692089 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.00s
2025-10-09 10:35:07.692099 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.91s
2025-10-09 10:35:07.692109 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.90s
2025-10-09 10:35:07.692118 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.76s
2025-10-09 10:35:07.692128 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s
2025-10-09 10:35:07.692138 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s
2025-10-09 10:35:07.692147 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s
2025-10-09 10:35:07.692157 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s
2025-10-09 10:35:07.692167 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.68s
2025-10-09 10:35:07.692176 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.66s
2025-10-09 10:35:07.692186 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.66s
2025-10-09 10:35:07.692207 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s
2025-10-09 10:35:07.692218 | orchestrator | 2025-10-09 10:35:07 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED
2025-10-09 10:35:07.692228 | orchestrator | 2025-10-09 10:35:07 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED
2025-10-09 10:35:07.692238 | orchestrator | 2025-10-09 10:35:07 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:35:10.748600 | orchestrator | 2025-10-09 10:35:10 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED
2025-10-09 10:35:10.751010 | orchestrator | 2025-10-09 10:35:10 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED
2025-10-09 10:35:10.753118 | orchestrator | 2025-10-09 10:35:10 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED
2025-10-09 10:35:10.753188 | orchestrator | 2025-10-09 10:35:10 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:35:13.803215 | orchestrator | 2025-10-09 10:35:13 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED
2025-10-09 10:35:13.806191 | orchestrator | 2025-10-09 10:35:13 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED
2025-10-09 10:35:13.810282 | orchestrator | 2025-10-09 10:35:13 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED
2025-10-09 10:35:13.811272 | orchestrator | 2025-10-09 10:35:13 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:35:16.864252 | orchestrator | 2025-10-09 10:35:16 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED
2025-10-09 10:35:16.866349 | orchestrator | 2025-10-09 10:35:16 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED
2025-10-09 10:35:16.868280 | orchestrator | 2025-10-09 10:35:16 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED
2025-10-09 10:35:16.868554 | orchestrator | 2025-10-09
10:35:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:19.907772 | orchestrator | 2025-10-09 10:35:19 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:19.909120 | orchestrator | 2025-10-09 10:35:19 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:19.911390 | orchestrator | 2025-10-09 10:35:19 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED 2025-10-09 10:35:19.911422 | orchestrator | 2025-10-09 10:35:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:22.948295 | orchestrator | 2025-10-09 10:35:22 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:22.950346 | orchestrator | 2025-10-09 10:35:22 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:22.951508 | orchestrator | 2025-10-09 10:35:22 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED 2025-10-09 10:35:22.951531 | orchestrator | 2025-10-09 10:35:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:26.006007 | orchestrator | 2025-10-09 10:35:26 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:26.008246 | orchestrator | 2025-10-09 10:35:26 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:26.010993 | orchestrator | 2025-10-09 10:35:26 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED 2025-10-09 10:35:26.011311 | orchestrator | 2025-10-09 10:35:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:29.067999 | orchestrator | 2025-10-09 10:35:29 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:29.069462 | orchestrator | 2025-10-09 10:35:29 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:29.070733 | orchestrator | 2025-10-09 10:35:29 | INFO  | Task 
306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED 2025-10-09 10:35:29.070759 | orchestrator | 2025-10-09 10:35:29 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:32.129848 | orchestrator | 2025-10-09 10:35:32 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:32.132433 | orchestrator | 2025-10-09 10:35:32 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:32.134466 | orchestrator | 2025-10-09 10:35:32 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED 2025-10-09 10:35:32.134494 | orchestrator | 2025-10-09 10:35:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:35.193603 | orchestrator | 2025-10-09 10:35:35 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:35.194675 | orchestrator | 2025-10-09 10:35:35 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:35.195768 | orchestrator | 2025-10-09 10:35:35 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED 2025-10-09 10:35:35.195797 | orchestrator | 2025-10-09 10:35:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:38.240770 | orchestrator | 2025-10-09 10:35:38 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:38.241658 | orchestrator | 2025-10-09 10:35:38 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:38.243887 | orchestrator | 2025-10-09 10:35:38 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED 2025-10-09 10:35:38.243915 | orchestrator | 2025-10-09 10:35:38 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:41.276866 | orchestrator | 2025-10-09 10:35:41 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:41.278414 | orchestrator | 2025-10-09 10:35:41 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state 
STARTED 2025-10-09 10:35:41.280077 | orchestrator | 2025-10-09 10:35:41 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED 2025-10-09 10:35:41.280454 | orchestrator | 2025-10-09 10:35:41 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:44.320910 | orchestrator | 2025-10-09 10:35:44 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:44.322833 | orchestrator | 2025-10-09 10:35:44 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:44.325263 | orchestrator | 2025-10-09 10:35:44 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state STARTED 2025-10-09 10:35:44.325300 | orchestrator | 2025-10-09 10:35:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:47.371328 | orchestrator | 2025-10-09 10:35:47 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:47.373611 | orchestrator | 2025-10-09 10:35:47 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:47.374924 | orchestrator | 2025-10-09 10:35:47 | INFO  | Task 306466c0-3878-4815-a13e-05c0aa7fbfbb is in state SUCCESS 2025-10-09 10:35:47.374941 | orchestrator | 2025-10-09 10:35:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:50.428160 | orchestrator | 2025-10-09 10:35:50 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:50.430154 | orchestrator | 2025-10-09 10:35:50 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:35:50.431707 | orchestrator | 2025-10-09 10:35:50 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:35:50.431732 | orchestrator | 2025-10-09 10:35:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:35:53.472154 | orchestrator | 2025-10-09 10:35:53 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED 2025-10-09 10:35:53.472832 | orchestrator | 
2025-10-09 10:35:53 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED
2025-10-09 10:35:53.473365 | orchestrator | 2025-10-09 10:35:53 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED
2025-10-09 10:35:53.473554 | orchestrator | 2025-10-09 10:35:53 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:35:56.510937 | orchestrator | 2025-10-09 10:35:56 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED
2025-10-09 10:35:56.512338 | orchestrator | 2025-10-09 10:35:56 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED
2025-10-09 10:35:56.513843 | orchestrator | 2025-10-09 10:35:56 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED
2025-10-09 10:35:56.513862 | orchestrator | 2025-10-09 10:35:56 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:35:59.554724 | orchestrator | 2025-10-09 10:35:59 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state STARTED
2025-10-09 10:35:59.555936 | orchestrator | 2025-10-09 10:35:59 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED
2025-10-09 10:35:59.557626 | orchestrator | 2025-10-09 10:35:59 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED
2025-10-09 10:35:59.557649 | orchestrator | 2025-10-09 10:35:59 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:36:02.603351 | orchestrator | 2025-10-09 10:36:02 | INFO  | Task e66238e1-61c4-4d44-8f9b-eb27c3e4b41b is in state SUCCESS
2025-10-09 10:36:02.605404 | orchestrator |
2025-10-09 10:36:02.605445 | orchestrator |
2025-10-09 10:36:02.605458 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-10-09 10:36:02.605471 | orchestrator |
2025-10-09 10:36:02.605482 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2025-10-09 10:36:02.605494 | orchestrator | Thursday 09 October 2025 10:35:12 +0000 (0:00:00.167)       0:00:00.167 ******
2025-10-09 10:36:02.605578 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-10-09 10:36:02.605594 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.605606 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.605616 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-10-09 10:36:02.605627 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.605874 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-10-09 10:36:02.605890 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-10-09 10:36:02.605900 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-10-09 10:36:02.605911 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-10-09 10:36:02.605923 | orchestrator |
2025-10-09 10:36:02.605934 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-10-09 10:36:02.605945 | orchestrator | Thursday 09 October 2025 10:35:17 +0000 (0:00:04.947)       0:00:05.114 ******
2025-10-09 10:36:02.605956 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-10-09 10:36:02.605967 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.605977 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.605988 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-10-09 10:36:02.605999 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.606010 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-10-09 10:36:02.606102 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-10-09 10:36:02.606114 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-10-09 10:36:02.606125 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-10-09 10:36:02.606136 | orchestrator |
2025-10-09 10:36:02.606147 | orchestrator | TASK [Create share directory] **************************************************
2025-10-09 10:36:02.606158 | orchestrator | Thursday 09 October 2025 10:35:21 +0000 (0:00:04.112)       0:00:09.226 ******
2025-10-09 10:36:02.606170 | orchestrator | changed: [testbed-manager -> localhost]
2025-10-09 10:36:02.606182 | orchestrator |
2025-10-09 10:36:02.606218 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-10-09 10:36:02.606229 | orchestrator | Thursday 09 October 2025 10:35:22 +0000 (0:00:01.052)       0:00:10.278 ******
2025-10-09 10:36:02.606240 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-10-09 10:36:02.606252 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.606276 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.606288 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-10-09 10:36:02.606299 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.606310 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-10-09 10:36:02.606321 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-10-09 10:36:02.606332 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-10-09 10:36:02.606343 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-10-09 10:36:02.606354 | orchestrator |
2025-10-09 10:36:02.606365 | orchestrator | TASK [Check if target directories exist] ***************************************
2025-10-09 10:36:02.606376 | orchestrator | Thursday 09 October 2025 10:35:36 +0000 (0:00:14.570)       0:00:24.849 ******
2025-10-09 10:36:02.606387 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2025-10-09 10:36:02.606398 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2025-10-09 10:36:02.606410 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2025-10-09 10:36:02.606421 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2025-10-09 10:36:02.606445 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2025-10-09 10:36:02.606456 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2025-10-09 10:36:02.606468 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2025-10-09 10:36:02.606478 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2025-10-09 10:36:02.606489 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2025-10-09 10:36:02.606500 | orchestrator |
2025-10-09 10:36:02.606511 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-10-09 10:36:02.606522 | orchestrator | Thursday 09 October 2025 10:35:39 +0000 (0:00:03.151)       0:00:28.000 ******
2025-10-09 10:36:02.606534 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-10-09 10:36:02.606545 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.606556 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.606574 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-10-09 10:36:02.606586 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-10-09 10:36:02.606597 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-10-09 10:36:02.606608 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-10-09 10:36:02.606619 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-10-09 10:36:02.606629 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-10-09 10:36:02.606640 | orchestrator |
2025-10-09 10:36:02.606651 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:36:02.606663 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:36:02.606675 | orchestrator |
2025-10-09 10:36:02.606686 | orchestrator |
2025-10-09 10:36:02.606697 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:36:02.606708 | orchestrator | Thursday 09 October 2025 10:35:46 +0000 (0:00:06.361)       0:00:34.362 ******
2025-10-09 10:36:02.606719 | orchestrator | ===============================================================================
2025-10-09 10:36:02.606730 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.57s
2025-10-09 10:36:02.606741 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.36s
2025-10-09 10:36:02.606752 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.95s
2025-10-09 10:36:02.606763 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.11s
2025-10-09 10:36:02.606774 | orchestrator | Check if target directories exist --------------------------------------- 3.15s
2025-10-09 10:36:02.606785 | orchestrator | Create share directory -------------------------------------------------- 1.05s
2025-10-09 10:36:02.606796 | orchestrator |
2025-10-09 10:36:02.606807 | orchestrator |
2025-10-09 10:36:02.606817 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:36:02.606828 | orchestrator |
2025-10-09 10:36:02.606919 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:36:02.606932 | orchestrator | Thursday 09 October 2025 10:34:14 +0000 (0:00:00.297)       0:00:00.297 ******
2025-10-09 10:36:02.606943 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.606954 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.606965 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.606976 | orchestrator |
2025-10-09 10:36:02.606987 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:36:02.606997 | orchestrator | Thursday 09 October 2025 10:34:14 +0000 (0:00:00.323)       0:00:00.621 ******
2025-10-09 10:36:02.607015 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-10-09 10:36:02.607027 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-10-09 10:36:02.607038 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-10-09 10:36:02.607049 | orchestrator |
2025-10-09 10:36:02.607060 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-10-09 10:36:02.607071 | orchestrator |
2025-10-09 10:36:02.607082 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-10-09 10:36:02.607169 | orchestrator | Thursday 09 October 2025 10:34:15 +0000 (0:00:00.476)       0:00:01.097 ******
2025-10-09 10:36:02.607182 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:36:02.607211 | orchestrator |
2025-10-09 10:36:02.607222 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-10-09 10:36:02.607233 | orchestrator | Thursday 09 October 2025 10:34:15 +0000 (0:00:00.573)       0:00:01.671 ******
2025-10-09 10:36:02.607264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-09 10:36:02.607298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-09 10:36:02.607330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-10-09 10:36:02.607343 | orchestrator |
2025-10-09 10:36:02.607355 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-10-09 10:36:02.607366 | orchestrator | Thursday 09 October 2025 10:34:16 +0000 (0:00:01.184)       0:00:02.856 ******
2025-10-09 10:36:02.607378 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.607389 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.607400 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.607411 | orchestrator |
2025-10-09 10:36:02.607422 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-10-09 10:36:02.607433 | orchestrator | Thursday 09 October 2025 10:34:17 +0000 (0:00:00.453)       0:00:03.309 ******
2025-10-09 10:36:02.607443 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-10-09 10:36:02.607454 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-10-09 10:36:02.607465 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-10-09 10:36:02.607476 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-10-09 10:36:02.607487 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-10-09 10:36:02.607498 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-10-09 10:36:02.607509 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-10-09 10:36:02.607525 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-10-09 10:36:02.607536 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-10-09 10:36:02.607547 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-10-09 10:36:02.607566 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-10-09 10:36:02.607577 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-10-09 10:36:02.607588 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-10-09 10:36:02.607599 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-10-09 10:36:02.607610 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-10-09 10:36:02.607621 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-10-09 10:36:02.607632 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-10-09 10:36:02.607643 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-10-09 10:36:02.607654 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-10-09 10:36:02.607664 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-10-09 10:36:02.607675 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-10-09 10:36:02.607686 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-10-09 10:36:02.607704 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-10-09 10:36:02.607715 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-10-09 10:36:02.607727 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-10-09 10:36:02.607740 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-10-09 10:36:02.607751 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-10-09 10:36:02.607762 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-10-09 10:36:02.607773 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-10-09 10:36:02.607784 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-10-09 10:36:02.607795 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-10-09 10:36:02.607808 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-10-09 10:36:02.607821 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-10-09 10:36:02.607834 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-10-09 10:36:02.607846 | orchestrator |
2025-10-09 10:36:02.607859 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.607872 | orchestrator | Thursday 09 October 2025 10:34:18 +0000 (0:00:00.814)       0:00:04.124 ******
2025-10-09 10:36:02.607885 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.607898 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.607910 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.607922 | orchestrator |
2025-10-09 10:36:02.607935 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.607954 | orchestrator | Thursday 09 October 2025 10:34:18 +0000 (0:00:00.313)       0:00:04.438 ******
2025-10-09 10:36:02.607967 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.607980 | orchestrator |
2025-10-09 10:36:02.607993 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.608006 | orchestrator | Thursday 09 October 2025 10:34:18 +0000 (0:00:00.132)       0:00:04.570 ******
2025-10-09 10:36:02.608018 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608031 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.608044 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.608057 | orchestrator |
2025-10-09 10:36:02.608070 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.608082 | orchestrator | Thursday 09 October 2025 10:34:19 +0000 (0:00:00.533)       0:00:05.104 ******
2025-10-09 10:36:02.608095 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.608107 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.608120 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.608133 | orchestrator |
2025-10-09 10:36:02.608150 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.608162 | orchestrator | Thursday 09 October 2025 10:34:19 +0000 (0:00:00.399)       0:00:05.504 ******
2025-10-09 10:36:02.608173 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608184 | orchestrator |
2025-10-09 10:36:02.608213 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.608224 | orchestrator | Thursday 09 October 2025 10:34:19 +0000 (0:00:00.127)       0:00:05.631 ******
2025-10-09 10:36:02.608235 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608246 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.608257 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.608268 | orchestrator |
2025-10-09 10:36:02.608279 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.608290 | orchestrator | Thursday 09 October 2025 10:34:19 +0000 (0:00:00.293)       0:00:05.924 ******
2025-10-09 10:36:02.608301 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.608312 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.608323 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.608334 | orchestrator |
2025-10-09 10:36:02.608345 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.608356 | orchestrator | Thursday 09 October 2025 10:34:20 +0000 (0:00:00.388)       0:00:06.312 ******
2025-10-09 10:36:02.608366 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608377 | orchestrator |
2025-10-09 10:36:02.608388 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.608399 | orchestrator | Thursday 09 October 2025 10:34:20 +0000 (0:00:00.156)       0:00:06.469 ******
2025-10-09 10:36:02.608410 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608421 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.608432 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.608443 | orchestrator |
2025-10-09 10:36:02.608454 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.608470 | orchestrator | Thursday 09 October 2025 10:34:20 +0000 (0:00:00.518)       0:00:06.987 ******
2025-10-09 10:36:02.608482 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.608493 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.608504 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.608515 | orchestrator |
2025-10-09 10:36:02.608526 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.608537 | orchestrator | Thursday 09 October 2025 10:34:21 +0000 (0:00:00.323)       0:00:07.311 ******
2025-10-09 10:36:02.608548 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608559 | orchestrator |
2025-10-09 10:36:02.608570 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.608581 | orchestrator | Thursday 09 October 2025 10:34:21 +0000 (0:00:00.139)       0:00:07.450 ******
2025-10-09 10:36:02.608592 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608609 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.608620 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.608631 | orchestrator |
2025-10-09 10:36:02.608642 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.608653 | orchestrator | Thursday 09 October 2025 10:34:21 +0000 (0:00:00.329)       0:00:07.780 ******
2025-10-09 10:36:02.608664 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.608675 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.608686 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.608697 | orchestrator |
2025-10-09 10:36:02.608708 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.608719 | orchestrator | Thursday 09 October 2025 10:34:22 +0000 (0:00:00.559)       0:00:08.339 ******
2025-10-09 10:36:02.608730 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608741 | orchestrator |
2025-10-09 10:36:02.608752 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.608763 | orchestrator | Thursday 09 October 2025 10:34:22 +0000 (0:00:00.167)       0:00:08.507 ******
2025-10-09 10:36:02.608774 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608785 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.608796 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.608806 | orchestrator |
2025-10-09 10:36:02.608817 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.608828 | orchestrator | Thursday 09 October 2025 10:34:22 +0000 (0:00:00.303)       0:00:08.811 ******
2025-10-09 10:36:02.608839 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.608850 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.608861 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.608872 | orchestrator |
2025-10-09 10:36:02.608883 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.608894 | orchestrator | Thursday 09 October 2025 10:34:23 +0000 (0:00:00.312)       0:00:09.123 ******
2025-10-09 10:36:02.608905 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608916 | orchestrator |
2025-10-09 10:36:02.608927 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.608938 | orchestrator | Thursday 09 October 2025 10:34:23 +0000 (0:00:00.141)       0:00:09.265 ******
2025-10-09 10:36:02.608949 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.608960 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.608971 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.608982 | orchestrator |
2025-10-09 10:36:02.608993 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.609004 | orchestrator | Thursday 09 October 2025 10:34:23 +0000 (0:00:00.287)       0:00:09.552 ******
2025-10-09 10:36:02.609015 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.609026 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.609037 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.609047 | orchestrator |
2025-10-09 10:36:02.609059 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.609069 | orchestrator | Thursday 09 October 2025 10:34:23 +0000 (0:00:00.529)       0:00:10.082 ******
2025-10-09 10:36:02.609080 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.609091 | orchestrator |
2025-10-09 10:36:02.609102 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.609113 | orchestrator | Thursday 09 October 2025 10:34:24 +0000 (0:00:00.133)       0:00:10.215 ******
2025-10-09 10:36:02.609124 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.609135 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.609151 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.609162 | orchestrator |
2025-10-09 10:36:02.609173 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.609184 | orchestrator | Thursday 09 October 2025 10:34:24 +0000 (0:00:00.366)       0:00:10.582 ******
2025-10-09 10:36:02.609227 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.609239 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.609256 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.609267 | orchestrator |
2025-10-09 10:36:02.609278 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.609289 | orchestrator | Thursday 09 October 2025 10:34:24 +0000 (0:00:00.316)       0:00:10.898 ******
2025-10-09 10:36:02.609300 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.609311 | orchestrator |
2025-10-09 10:36:02.609322 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.609333 | orchestrator | Thursday 09 October 2025 10:34:24 +0000 (0:00:00.142)       0:00:11.040 ******
2025-10-09 10:36:02.609343 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.609354 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.609365 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.609376 | orchestrator |
2025-10-09 10:36:02.609387 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.609398 | orchestrator | Thursday 09 October 2025 10:34:25 +0000 (0:00:00.345)       0:00:11.386 ******
2025-10-09 10:36:02.609409 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.609419 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.609430 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.609441 | orchestrator |
2025-10-09 10:36:02.609452 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.609463 | orchestrator | Thursday 09 October 2025 10:34:25 +0000 (0:00:00.573)       0:00:11.959 ******
2025-10-09 10:36:02.609474 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.609485 | orchestrator |
2025-10-09 10:36:02.609501 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.609513 | orchestrator | Thursday 09 October 2025 10:34:26 +0000 (0:00:00.125)       0:00:12.085 ******
2025-10-09 10:36:02.609524 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.609535 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.609546 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.609556 | orchestrator |
2025-10-09 10:36:02.609567 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-10-09 10:36:02.609578 | orchestrator | Thursday 09 October 2025 10:34:26 +0000 (0:00:00.344)       0:00:12.429 ******
2025-10-09 10:36:02.609589 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:36:02.609600 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:36:02.609611 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:36:02.609622 | orchestrator |
2025-10-09 10:36:02.609633 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-10-09 10:36:02.609644 | orchestrator | Thursday 09 October 2025 10:34:26 +0000 (0:00:00.314)       0:00:12.743 ******
2025-10-09 10:36:02.609655 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.609666 | orchestrator |
2025-10-09 10:36:02.609677 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-10-09 10:36:02.609688 | orchestrator | Thursday 09 October 2025 10:34:26 +0000 (0:00:00.126)       0:00:12.870 ******
2025-10-09 10:36:02.609699 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.609710 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.609720 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.609731 | orchestrator |
2025-10-09 10:36:02.609742 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-10-09 10:36:02.609753 | orchestrator | Thursday 09 October 2025 10:34:27 +0000 (0:00:00.505)       0:00:13.375 ******
2025-10-09 10:36:02.609764 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:36:02.609775 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:36:02.609786 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:36:02.609796 | orchestrator |
2025-10-09 10:36:02.609807 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-10-09 10:36:02.609818 | orchestrator | Thursday 09 October 2025 10:34:29 +0000 (0:00:01.773)       0:00:15.149 ******
2025-10-09 10:36:02.609829 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-10-09 10:36:02.609852 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-10-09 10:36:02.609863 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-10-09 10:36:02.609874 | orchestrator |
2025-10-09 10:36:02.609885 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-10-09 10:36:02.609896 | orchestrator | Thursday 09 October 2025 10:34:30 +0000 (0:00:01.801)       0:00:16.951 ******
2025-10-09 10:36:02.609907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-10-09 10:36:02.609918 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-10-09 10:36:02.609929 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-10-09 10:36:02.609940 | orchestrator |
2025-10-09 10:36:02.609951 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-10-09 10:36:02.609962 | orchestrator | Thursday 09 October 2025 10:34:33 +0000 (0:00:02.259)       0:00:19.211 ******
2025-10-09 10:36:02.609972 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-10-09 10:36:02.609983 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-10-09 10:36:02.609994 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-10-09 10:36:02.610005 | orchestrator |
2025-10-09 10:36:02.610045 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-10-09 10:36:02.610064 | orchestrator | Thursday 09 October 2025 10:34:35 +0000 (0:00:02.171)       0:00:21.382 ******
2025-10-09 10:36:02.610075 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:36:02.610086 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:36:02.610097 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:36:02.610108 | orchestrator |
2025-10-09 10:36:02.610119 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-10-09 10:36:02.610129 | orchestrator | Thursday 09 October 2025 10:34:35 +0000 (0:00:00.344)       0:00:21.726 ******
2025-10-09
10:36:02.610140 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:36:02.610151 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:36:02.610162 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:36:02.610173 | orchestrator | 2025-10-09 10:36:02.610183 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-09 10:36:02.610211 | orchestrator | Thursday 09 October 2025 10:34:35 +0000 (0:00:00.300) 0:00:22.027 ****** 2025-10-09 10:36:02.610222 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:36:02.610234 | orchestrator | 2025-10-09 10:36:02.610244 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-10-09 10:36:02.610255 | orchestrator | Thursday 09 October 2025 10:34:36 +0000 (0:00:00.830) 0:00:22.857 ****** 2025-10-09 10:36:02.610278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:36:02.610307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:36:02.610331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:36:02.610353 | orchestrator | 2025-10-09 10:36:02.610365 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-10-09 10:36:02.610376 | orchestrator | Thursday 09 October 2025 10:34:38 +0000 (0:00:01.561) 0:00:24.419 ****** 2025-10-09 10:36:02.610400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2025-10-09 10:36:02.610414 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:36:02.610426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:36:02.610445 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:36:02.610472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:36:02.610491 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:36:02.610502 | orchestrator | 2025-10-09 10:36:02.610513 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-10-09 10:36:02.610524 | orchestrator | Thursday 09 October 2025 10:34:38 +0000 (0:00:00.661) 0:00:25.081 ****** 2025-10-09 10:36:02.610536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:36:02.610553 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:36:02.610573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:36:02.610592 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:36:02.610609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-10-09 10:36:02.610621 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:36:02.610632 | orchestrator | 2025-10-09 10:36:02.610643 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-10-09 10:36:02.610654 | orchestrator | Thursday 09 October 2025 10:34:39 +0000 (0:00:00.832) 0:00:25.913 ****** 2025-10-09 
10:36:02.610674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:36:02.610699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:36:02.610720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-10-09 10:36:02.610744 | orchestrator | 2025-10-09 10:36:02.610755 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-09 10:36:02.610767 | orchestrator | Thursday 09 October 2025 10:34:41 +0000 (0:00:01.685) 0:00:27.598 ****** 2025-10-09 10:36:02.610778 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:36:02.610789 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:36:02.610800 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:36:02.610811 | orchestrator | 2025-10-09 10:36:02.610822 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-10-09 10:36:02.610833 | orchestrator | Thursday 09 October 2025 10:34:41 +0000 (0:00:00.315) 0:00:27.914 ****** 2025-10-09 10:36:02.610844 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:36:02.610855 | orchestrator | 2025-10-09 10:36:02.610866 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-10-09 10:36:02.610877 | orchestrator | Thursday 09 October 2025 10:34:42 +0000 (0:00:00.559) 0:00:28.474 ****** 2025-10-09 10:36:02.610888 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:36:02.610899 | orchestrator | 2025-10-09 10:36:02.610910 | orchestrator | TASK [horizon : Creating 
Horizon database user and setting permissions] ******** 2025-10-09 10:36:02.610920 | orchestrator | Thursday 09 October 2025 10:34:44 +0000 (0:00:02.431) 0:00:30.905 ****** 2025-10-09 10:36:02.610931 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:36:02.610942 | orchestrator | 2025-10-09 10:36:02.610953 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-10-09 10:36:02.610964 | orchestrator | Thursday 09 October 2025 10:34:47 +0000 (0:00:02.692) 0:00:33.598 ****** 2025-10-09 10:36:02.610975 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:36:02.610986 | orchestrator | 2025-10-09 10:36:02.611005 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-10-09 10:36:02.611016 | orchestrator | Thursday 09 October 2025 10:35:04 +0000 (0:00:16.588) 0:00:50.186 ****** 2025-10-09 10:36:02.611027 | orchestrator | 2025-10-09 10:36:02.611038 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-10-09 10:36:02.611049 | orchestrator | Thursday 09 October 2025 10:35:04 +0000 (0:00:00.065) 0:00:50.251 ****** 2025-10-09 10:36:02.611067 | orchestrator | 2025-10-09 10:36:02.611078 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-10-09 10:36:02.611089 | orchestrator | Thursday 09 October 2025 10:35:04 +0000 (0:00:00.072) 0:00:50.324 ****** 2025-10-09 10:36:02.611100 | orchestrator | 2025-10-09 10:36:02.611110 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-10-09 10:36:02.611122 | orchestrator | Thursday 09 October 2025 10:35:04 +0000 (0:00:00.074) 0:00:50.399 ****** 2025-10-09 10:36:02.611132 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:36:02.611143 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:36:02.611154 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:36:02.611165 | orchestrator | 
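The per-host PLAY RECAP lines in this log (e.g. `ok=37  changed=11  unreachable=0 failed=0`) can be checked mechanically when post-processing job output. A minimal sketch; the regex and helper are illustrative tooling, not part of OSISM or Ansible itself:

```python
import re

# Matches the counter fields of an Ansible PLAY RECAP host line, as seen
# in this log. Only the first four counters are captured here.
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(line: str):
    """Return the host name and counters from a recap line, or None."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    d = m.groupdict()
    return {k: (v if k == "host" else int(v)) for k, v in d.items()}

line = ("testbed-node-0 : ok=37  changed=11  unreachable=0 "
        "failed=0 skipped=25  rescued=0 ignored=0")
recap = parse_recap(line)
# A run is healthy when no host reports failures or unreachable nodes.
assert recap["failed"] == 0 and recap["unreachable"] == 0
```

Feeding every recap line of a job through such a filter is a simple way to fail a pipeline stage on partial Ansible failures that do not abort the play.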
2025-10-09 10:36:02.611176 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:36:02.611203 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-10-09 10:36:02.611215 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-10-09 10:36:02.611227 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-10-09 10:36:02.611238 | orchestrator | 2025-10-09 10:36:02.611249 | orchestrator | 2025-10-09 10:36:02.611266 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:36:02.611277 | orchestrator | Thursday 09 October 2025 10:36:01 +0000 (0:00:57.478) 0:01:47.878 ****** 2025-10-09 10:36:02.611288 | orchestrator | =============================================================================== 2025-10-09 10:36:02.611299 | orchestrator | horizon : Restart horizon container ------------------------------------ 57.48s 2025-10-09 10:36:02.611310 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.59s 2025-10-09 10:36:02.611321 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.69s 2025-10-09 10:36:02.611332 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.43s 2025-10-09 10:36:02.611343 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.26s 2025-10-09 10:36:02.611354 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.17s 2025-10-09 10:36:02.611365 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.80s 2025-10-09 10:36:02.611376 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.77s 2025-10-09 
10:36:02.611387 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.69s 2025-10-09 10:36:02.611398 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.56s 2025-10-09 10:36:02.611409 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.19s 2025-10-09 10:36:02.611420 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s 2025-10-09 10:36:02.611430 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2025-10-09 10:36:02.611441 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2025-10-09 10:36:02.611452 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s 2025-10-09 10:36:02.611463 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2025-10-09 10:36:02.611474 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2025-10-09 10:36:02.611485 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2025-10-09 10:36:02.611496 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2025-10-09 10:36:02.611507 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2025-10-09 10:36:02.611518 | orchestrator | 2025-10-09 10:36:02 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:02.611535 | orchestrator | 2025-10-09 10:36:02 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:02.611547 | orchestrator | 2025-10-09 10:36:02 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:05.653765 | orchestrator | 2025-10-09 10:36:05 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state 
STARTED 2025-10-09 10:36:05.656150 | orchestrator | 2025-10-09 10:36:05 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:05.656178 | orchestrator | 2025-10-09 10:36:05 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:08.692646 | orchestrator | 2025-10-09 10:36:08 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:08.693464 | orchestrator | 2025-10-09 10:36:08 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:08.693496 | orchestrator | 2025-10-09 10:36:08 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:11.744898 | orchestrator | 2025-10-09 10:36:11 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:11.747247 | orchestrator | 2025-10-09 10:36:11 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:11.747399 | orchestrator | 2025-10-09 10:36:11 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:14.790920 | orchestrator | 2025-10-09 10:36:14 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:14.791649 | orchestrator | 2025-10-09 10:36:14 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:14.791683 | orchestrator | 2025-10-09 10:36:14 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:17.837075 | orchestrator | 2025-10-09 10:36:17 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:17.839550 | orchestrator | 2025-10-09 10:36:17 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:17.840094 | orchestrator | 2025-10-09 10:36:17 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:20.888483 | orchestrator | 2025-10-09 10:36:20 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:20.891665 | orchestrator | 2025-10-09 10:36:20 | INFO  
| Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:20.891701 | orchestrator | 2025-10-09 10:36:20 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:23.933815 | orchestrator | 2025-10-09 10:36:23 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:23.936725 | orchestrator | 2025-10-09 10:36:23 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:23.937125 | orchestrator | 2025-10-09 10:36:23 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:26.989531 | orchestrator | 2025-10-09 10:36:26 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:26.991159 | orchestrator | 2025-10-09 10:36:26 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:26.991288 | orchestrator | 2025-10-09 10:36:26 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:30.037369 | orchestrator | 2025-10-09 10:36:30 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:30.039143 | orchestrator | 2025-10-09 10:36:30 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:30.039181 | orchestrator | 2025-10-09 10:36:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:33.080986 | orchestrator | 2025-10-09 10:36:33 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:33.081403 | orchestrator | 2025-10-09 10:36:33 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:33.081433 | orchestrator | 2025-10-09 10:36:33 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:36.112641 | orchestrator | 2025-10-09 10:36:36 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:36.113822 | orchestrator | 2025-10-09 10:36:36 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 
10:36:36.114404 | orchestrator | 2025-10-09 10:36:36 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:39.161779 | orchestrator | 2025-10-09 10:36:39 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:39.162659 | orchestrator | 2025-10-09 10:36:39 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:39.162687 | orchestrator | 2025-10-09 10:36:39 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:42.205442 | orchestrator | 2025-10-09 10:36:42 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:42.205924 | orchestrator | 2025-10-09 10:36:42 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:42.205959 | orchestrator | 2025-10-09 10:36:42 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:45.260716 | orchestrator | 2025-10-09 10:36:45 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state STARTED 2025-10-09 10:36:45.263751 | orchestrator | 2025-10-09 10:36:45 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:45.264370 | orchestrator | 2025-10-09 10:36:45 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:48.325031 | orchestrator | 2025-10-09 10:36:48 | INFO  | Task d39a2609-cbf5-417a-9f3d-26f1170b8c9c is in state STARTED 2025-10-09 10:36:48.326852 | orchestrator | 2025-10-09 10:36:48 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED 2025-10-09 10:36:48.328463 | orchestrator | 2025-10-09 10:36:48 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:36:48.330576 | orchestrator | 2025-10-09 10:36:48 | INFO  | Task 6bd96453-946d-4505-9201-0a411b8ca1b4 is in state SUCCESS 2025-10-09 10:36:48.332540 | orchestrator | 2025-10-09 10:36:48 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:48.332559 | orchestrator | 2025-10-09 10:36:48 | 
INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:51.371955 | orchestrator | 2025-10-09 10:36:51 | INFO  | Task d39a2609-cbf5-417a-9f3d-26f1170b8c9c is in state STARTED 2025-10-09 10:36:51.372056 | orchestrator | 2025-10-09 10:36:51 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED 2025-10-09 10:36:51.372071 | orchestrator | 2025-10-09 10:36:51 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:36:51.372640 | orchestrator | 2025-10-09 10:36:51 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:51.372662 | orchestrator | 2025-10-09 10:36:51 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:54.412306 | orchestrator | 2025-10-09 10:36:54 | INFO  | Task d39a2609-cbf5-417a-9f3d-26f1170b8c9c is in state SUCCESS 2025-10-09 10:36:54.413312 | orchestrator | 2025-10-09 10:36:54 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED 2025-10-09 10:36:54.415671 | orchestrator | 2025-10-09 10:36:54 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:36:54.416650 | orchestrator | 2025-10-09 10:36:54 | INFO  | Task 84702a7d-d492-43d5-9a44-9b57f808b53c is in state STARTED 2025-10-09 10:36:54.418883 | orchestrator | 2025-10-09 10:36:54 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:54.420085 | orchestrator | 2025-10-09 10:36:54 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED 2025-10-09 10:36:54.420263 | orchestrator | 2025-10-09 10:36:54 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:36:57.465855 | orchestrator | 2025-10-09 10:36:57 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED 2025-10-09 10:36:57.468309 | orchestrator | 2025-10-09 10:36:57 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:36:57.468343 | orchestrator | 2025-10-09 10:36:57 | INFO  | Task 
84702a7d-d492-43d5-9a44-9b57f808b53c is in state STARTED 2025-10-09 10:36:57.468357 | orchestrator | 2025-10-09 10:36:57 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:36:57.469581 | orchestrator | 2025-10-09 10:36:57 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED 2025-10-09 10:36:57.469605 | orchestrator | 2025-10-09 10:36:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:00.507084 | orchestrator | 2025-10-09 10:37:00 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED 2025-10-09 10:37:00.507472 | orchestrator | 2025-10-09 10:37:00 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:37:00.508073 | orchestrator | 2025-10-09 10:37:00 | INFO  | Task 84702a7d-d492-43d5-9a44-9b57f808b53c is in state STARTED 2025-10-09 10:37:00.510360 | orchestrator | 2025-10-09 10:37:00 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:37:00.511911 | orchestrator | 2025-10-09 10:37:00 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED 2025-10-09 10:37:00.511935 | orchestrator | 2025-10-09 10:37:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:37:03.555769 | orchestrator | 2025-10-09 10:37:03 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED 2025-10-09 10:37:03.556162 | orchestrator | 2025-10-09 10:37:03 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:37:03.557983 | orchestrator | 2025-10-09 10:37:03 | INFO  | Task 84702a7d-d492-43d5-9a44-9b57f808b53c is in state STARTED 2025-10-09 10:37:03.559782 | orchestrator | 2025-10-09 10:37:03 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state STARTED 2025-10-09 10:37:03.560627 | orchestrator | 2025-10-09 10:37:03 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED 2025-10-09 10:37:03.560798 | orchestrator | 2025-10-09 10:37:03 | INFO  | Wait 1 
second(s) until the next check 2025-10-09 10:37:06.599556 | orchestrator | 2025-10-09 10:37:06 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED 2025-10-09 10:37:06.599816 | orchestrator | 2025-10-09 10:37:06 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:37:06.601166 | orchestrator | 2025-10-09 10:37:06 | INFO  | Task 84702a7d-d492-43d5-9a44-9b57f808b53c is in state STARTED 2025-10-09 10:37:06.603682 | orchestrator | 2025-10-09 10:37:06 | INFO  | Task 5ebdd63c-c29b-441a-a2fb-46df10535d5c is in state SUCCESS 2025-10-09 10:37:06.605260 | orchestrator | 2025-10-09 10:37:06.605346 | orchestrator | 2025-10-09 10:37:06.605449 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-10-09 10:37:06.605465 | orchestrator | 2025-10-09 10:37:06.605477 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-10-09 10:37:06.605978 | orchestrator | Thursday 09 October 2025 10:35:50 +0000 (0:00:00.228) 0:00:00.228 ****** 2025-10-09 10:37:06.605998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-10-09 10:37:06.606012 | orchestrator | 2025-10-09 10:37:06.606071 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-10-09 10:37:06.606083 | orchestrator | Thursday 09 October 2025 10:35:50 +0000 (0:00:00.211) 0:00:00.440 ****** 2025-10-09 10:37:06.606094 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-10-09 10:37:06.606106 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-10-09 10:37:06.606118 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-10-09 10:37:06.606129 | orchestrator | 2025-10-09 10:37:06.606140 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] 
******************** 2025-10-09 10:37:06.606151 | orchestrator | Thursday 09 October 2025 10:35:51 +0000 (0:00:01.142) 0:00:01.583 ****** 2025-10-09 10:37:06.606163 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-10-09 10:37:06.606174 | orchestrator | 2025-10-09 10:37:06.606207 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-10-09 10:37:06.606219 | orchestrator | Thursday 09 October 2025 10:35:53 +0000 (0:00:01.509) 0:00:03.092 ****** 2025-10-09 10:37:06.606231 | orchestrator | changed: [testbed-manager] 2025-10-09 10:37:06.606242 | orchestrator | 2025-10-09 10:37:06.606253 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-10-09 10:37:06.606264 | orchestrator | Thursday 09 October 2025 10:35:54 +0000 (0:00:00.949) 0:00:04.042 ****** 2025-10-09 10:37:06.606275 | orchestrator | changed: [testbed-manager] 2025-10-09 10:37:06.606286 | orchestrator | 2025-10-09 10:37:06.606297 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-10-09 10:37:06.606308 | orchestrator | Thursday 09 October 2025 10:35:55 +0000 (0:00:00.958) 0:00:05.000 ****** 2025-10-09 10:37:06.606319 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
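The `FAILED - RETRYING: ... (10 retries left).` message above is produced by Ansible's `retries`/`until` loop on the "Manage cephclient service" task. The same pattern can be sketched in plain Python; `check` stands in for a hypothetical service probe and is not part of the actual role:

```python
import time

def retry_until(check, retries=10, delay=5.0):
    """Re-run check() until it returns truthy or retries are exhausted,
    mirroring Ansible's retries/until behaviour seen in the log above."""
    for attempt in range(retries):
        result = check()
        if result:
            return result
        # Ansible counts down the remaining attempts in the same way.
        print(f"FAILED - RETRYING ({retries - attempt - 1} retries left).")
        time.sleep(delay)
    raise RuntimeError("service did not become available")

# Usage sketch: a probe that fails once, then succeeds on the next attempt.
state = {"calls": 0}
def probe():
    state["calls"] += 1
    return state["calls"] > 1

assert retry_until(probe, retries=10, delay=0.01) is True
```

In the log this retry accounts for most of the task's 40.69s runtime: the service needed one failed probe plus the retry delay before reporting healthy.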
2025-10-09 10:37:06.606330 | orchestrator | ok: [testbed-manager] 2025-10-09 10:37:06.606341 | orchestrator | 2025-10-09 10:37:06.606352 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-10-09 10:37:06.606363 | orchestrator | Thursday 09 October 2025 10:36:35 +0000 (0:00:40.690) 0:00:45.690 ****** 2025-10-09 10:37:06.606375 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-10-09 10:37:06.606386 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-10-09 10:37:06.606398 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-10-09 10:37:06.606409 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-10-09 10:37:06.606420 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-10-09 10:37:06.606431 | orchestrator | 2025-10-09 10:37:06.606442 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-10-09 10:37:06.606453 | orchestrator | Thursday 09 October 2025 10:36:40 +0000 (0:00:04.325) 0:00:50.016 ****** 2025-10-09 10:37:06.606464 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-10-09 10:37:06.606475 | orchestrator | 2025-10-09 10:37:06.606486 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-10-09 10:37:06.606497 | orchestrator | Thursday 09 October 2025 10:36:40 +0000 (0:00:00.491) 0:00:50.508 ****** 2025-10-09 10:37:06.606508 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:37:06.606519 | orchestrator | 2025-10-09 10:37:06.606530 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-10-09 10:37:06.606541 | orchestrator | Thursday 09 October 2025 10:36:40 +0000 (0:00:00.150) 0:00:50.659 ****** 2025-10-09 10:37:06.606552 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:37:06.606563 | orchestrator | 2025-10-09 10:37:06.606576 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-10-09 10:37:06.606599 | orchestrator | Thursday 09 October 2025 10:36:41 +0000 (0:00:00.522) 0:00:51.181 ****** 2025-10-09 10:37:06.606612 | orchestrator | changed: [testbed-manager] 2025-10-09 10:37:06.606625 | orchestrator | 2025-10-09 10:37:06.606637 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-10-09 10:37:06.606649 | orchestrator | Thursday 09 October 2025 10:36:43 +0000 (0:00:01.654) 0:00:52.836 ****** 2025-10-09 10:37:06.606662 | orchestrator | changed: [testbed-manager] 2025-10-09 10:37:06.606674 | orchestrator | 2025-10-09 10:37:06.606686 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-10-09 10:37:06.606698 | orchestrator | Thursday 09 October 2025 10:36:43 +0000 (0:00:00.838) 0:00:53.674 ****** 2025-10-09 10:37:06.606710 | orchestrator | changed: [testbed-manager] 2025-10-09 10:37:06.606723 | orchestrator | 2025-10-09 10:37:06.606736 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-10-09 10:37:06.606748 | orchestrator | Thursday 09 October 2025 10:36:44 +0000 (0:00:00.687) 0:00:54.362 ****** 2025-10-09 10:37:06.606775 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-10-09 10:37:06.606788 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-10-09 10:37:06.606800 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-10-09 10:37:06.606813 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-10-09 10:37:06.606824 | orchestrator | 2025-10-09 10:37:06.606837 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:37:06.606850 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:37:06.606864 | orchestrator | 2025-10-09 10:37:06.606876 | orchestrator | 2025-10-09 
10:37:06.606932 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:37:06.606946 | orchestrator | Thursday 09 October 2025 10:36:46 +0000 (0:00:01.488) 0:00:55.850 ****** 2025-10-09 10:37:06.606957 | orchestrator | =============================================================================== 2025-10-09 10:37:06.606968 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.69s 2025-10-09 10:37:06.606979 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.33s 2025-10-09 10:37:06.606990 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.65s 2025-10-09 10:37:06.607000 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.51s 2025-10-09 10:37:06.607011 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.49s 2025-10-09 10:37:06.607022 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.14s 2025-10-09 10:37:06.607033 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s 2025-10-09 10:37:06.607044 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2025-10-09 10:37:06.607055 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s 2025-10-09 10:37:06.607066 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.69s 2025-10-09 10:37:06.607077 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.52s 2025-10-09 10:37:06.607088 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s 2025-10-09 10:37:06.607099 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-10-09 10:37:06.607109 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-10-09 10:37:06.607120 | orchestrator | 2025-10-09 10:37:06.607131 | orchestrator | 2025-10-09 10:37:06.607142 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:37:06.607153 | orchestrator | 2025-10-09 10:37:06.607164 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:37:06.607175 | orchestrator | Thursday 09 October 2025 10:36:50 +0000 (0:00:00.185) 0:00:00.185 ****** 2025-10-09 10:37:06.607214 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:37:06.607226 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:37:06.607236 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:37:06.607247 | orchestrator | 2025-10-09 10:37:06.607258 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:37:06.607269 | orchestrator | Thursday 09 October 2025 10:36:51 +0000 (0:00:00.322) 0:00:00.507 ****** 2025-10-09 10:37:06.607280 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-10-09 10:37:06.607291 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-10-09 10:37:06.607302 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-10-09 10:37:06.607313 | orchestrator | 2025-10-09 10:37:06.607324 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-10-09 10:37:06.607335 | orchestrator | 2025-10-09 10:37:06.607346 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-10-09 10:37:06.607356 | orchestrator | Thursday 09 October 2025 10:36:51 +0000 (0:00:00.740) 0:00:01.247 ****** 2025-10-09 10:37:06.607367 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:37:06.607378 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:37:06.607389 | orchestrator | ok: 
[testbed-node-2] 2025-10-09 10:37:06.607400 | orchestrator | 2025-10-09 10:37:06.607410 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:37:06.607423 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:37:06.607434 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:37:06.607446 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:37:06.607456 | orchestrator | 2025-10-09 10:37:06.607467 | orchestrator | 2025-10-09 10:37:06.607478 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:37:06.607489 | orchestrator | Thursday 09 October 2025 10:36:52 +0000 (0:00:00.723) 0:00:01.972 ****** 2025-10-09 10:37:06.607500 | orchestrator | =============================================================================== 2025-10-09 10:37:06.607510 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2025-10-09 10:37:06.607521 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s 2025-10-09 10:37:06.607532 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-10-09 10:37:06.607543 | orchestrator | 2025-10-09 10:37:06.607553 | orchestrator | 2025-10-09 10:37:06.607564 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:37:06.607575 | orchestrator | 2025-10-09 10:37:06.607586 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:37:06.607596 | orchestrator | Thursday 09 October 2025 10:34:14 +0000 (0:00:00.286) 0:00:00.286 ****** 2025-10-09 10:37:06.607607 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:37:06.607623 | 
orchestrator | ok: [testbed-node-1] 2025-10-09 10:37:06.607635 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:37:06.607645 | orchestrator | 2025-10-09 10:37:06.607656 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:37:06.607667 | orchestrator | Thursday 09 October 2025 10:34:14 +0000 (0:00:00.336) 0:00:00.623 ****** 2025-10-09 10:37:06.607678 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-10-09 10:37:06.607689 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-10-09 10:37:06.607700 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-10-09 10:37:06.607710 | orchestrator | 2025-10-09 10:37:06.607721 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-10-09 10:37:06.607732 | orchestrator | 2025-10-09 10:37:06.607773 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-09 10:37:06.607787 | orchestrator | Thursday 09 October 2025 10:34:14 +0000 (0:00:00.486) 0:00:01.109 ****** 2025-10-09 10:37:06.607804 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:37:06.607815 | orchestrator | 2025-10-09 10:37:06.607826 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-10-09 10:37:06.607837 | orchestrator | Thursday 09 October 2025 10:34:15 +0000 (0:00:00.613) 0:00:01.723 ****** 2025-10-09 10:37:06.607853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.607870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.607884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.607903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.607955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.607969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.607981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.607992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608015 | orchestrator |
2025-10-09 10:37:06.608026 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-10-09 10:37:06.608037 | orchestrator | Thursday 09 October 2025 10:34:17 +0000 (0:00:01.917) 0:00:03.640 ******
2025-10-09 10:37:06.608048 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-10-09 10:37:06.608059 | orchestrator |
2025-10-09 10:37:06.608070 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-10-09 10:37:06.608081 | orchestrator | Thursday 09 October 2025 10:34:18 +0000 (0:00:00.922) 0:00:04.562 ******
2025-10-09 10:37:06.608092 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:37:06.608103 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:37:06.608120 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:37:06.608131 | orchestrator |
2025-10-09 10:37:06.608147 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-10-09 10:37:06.608158 | orchestrator | Thursday 09 October 2025 10:34:19 +0000 (0:00:00.584) 0:00:05.147 ******
2025-10-09 10:37:06.608169 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-09 10:37:06.608180 | orchestrator |
2025-10-09 10:37:06.608222 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-10-09 10:37:06.608234 | orchestrator | Thursday 09 October 2025 10:34:19 +0000 (0:00:00.712) 0:00:05.859 ******
2025-10-09 10:37:06.608245 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:37:06.608256 | orchestrator |
2025-10-09 10:37:06.608273 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-10-09 10:37:06.608284 | orchestrator | Thursday 09 October 2025 10:34:20 +0000 (0:00:00.585) 0:00:06.445 ******
2025-10-09 10:37:06.608297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608428 | orchestrator |
2025-10-09 10:37:06.608439 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-10-09 10:37:06.608451 | orchestrator | Thursday 09 October 2025 10:34:23 +0000 (0:00:03.462) 0:00:09.908 ******
2025-10-09 10:37:06.608481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608525 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:37:06.608537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608578 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:37:06.608602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608637 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:37:06.608649 | orchestrator |
2025-10-09 10:37:06.608660 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-10-09 10:37:06.608671 | orchestrator | Thursday 09 October 2025 10:34:24 +0000 (0:00:00.928) 0:00:10.836 ******
2025-10-09 10:37:06.608683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608731 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:37:06.608750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608786 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:37:06.608804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.608850 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:37:06.608861 | orchestrator |
2025-10-09 10:37:06.608872 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-10-09 10:37:06.608883 | orchestrator | Thursday 09 October 2025 10:34:25 +0000 (0:00:00.807) 0:00:11.644 ******
2025-10-09 10:37:06.608894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.608948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-10-09 10:37:06.608984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.609002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.609013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.609024 | orchestrator |
2025-10-09 10:37:06.609036 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-10-09 10:37:06.609047 | orchestrator | Thursday 09 October 2025 10:34:28 +0000 (0:00:03.377) 0:00:15.021 ******
2025-10-09 10:37:06.609069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-10-09 10:37:06.609082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:37:06.609094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:37:06.609112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:37:06.609124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:37:06.609141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:37:06.609160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:37:06.609171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:37:06.609201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:37:06.609220 | orchestrator | 2025-10-09 10:37:06.609231 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-10-09 10:37:06.609242 | orchestrator | Thursday 09 October 2025 10:34:34 +0000 (0:00:05.404) 0:00:20.426 ****** 2025-10-09 10:37:06.609253 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:37:06.609264 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:37:06.609275 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:37:06.609286 | orchestrator | 2025-10-09 10:37:06.609297 | orchestrator | TASK [keystone : 
Create Keystone domain-specific config directory] ************* 2025-10-09 10:37:06.609307 | orchestrator | Thursday 09 October 2025 10:34:35 +0000 (0:00:01.501) 0:00:21.927 ****** 2025-10-09 10:37:06.609318 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:37:06.609329 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:37:06.609340 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:37:06.609350 | orchestrator | 2025-10-09 10:37:06.609361 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-10-09 10:37:06.609372 | orchestrator | Thursday 09 October 2025 10:34:36 +0000 (0:00:00.547) 0:00:22.475 ****** 2025-10-09 10:37:06.609383 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:37:06.609393 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:37:06.609404 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:37:06.609415 | orchestrator | 2025-10-09 10:37:06.609426 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-10-09 10:37:06.609436 | orchestrator | Thursday 09 October 2025 10:34:36 +0000 (0:00:00.303) 0:00:22.778 ****** 2025-10-09 10:37:06.609447 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:37:06.609458 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:37:06.609469 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:37:06.609479 | orchestrator | 2025-10-09 10:37:06.609490 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-10-09 10:37:06.609501 | orchestrator | Thursday 09 October 2025 10:34:37 +0000 (0:00:00.501) 0:00:23.280 ****** 2025-10-09 10:37:06.609521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:37:06.609541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:37:06.609560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:37:06.609572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:37:06.609584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:37:06.609601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-10-09 10:37:06.609620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:37:06.609639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:37:06.609650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:37:06.609662 | orchestrator | 2025-10-09 10:37:06.609673 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-10-09 10:37:06.609684 | orchestrator | Thursday 09 October 2025 10:34:39 +0000 (0:00:02.437) 0:00:25.717 ****** 2025-10-09 10:37:06.609695 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:37:06.609706 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:37:06.609716 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:37:06.609727 | orchestrator | 2025-10-09 10:37:06.609738 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-10-09 10:37:06.609749 | orchestrator | Thursday 09 October 2025 10:34:39 +0000 (0:00:00.302) 0:00:26.020 ****** 2025-10-09 10:37:06.609760 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-10-09 10:37:06.609771 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-10-09 10:37:06.609782 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-10-09 10:37:06.609792 | orchestrator | 2025-10-09 
10:37:06.609803 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-10-09 10:37:06.609814 | orchestrator | Thursday 09 October 2025 10:34:41 +0000 (0:00:01.968) 0:00:27.988 ****** 2025-10-09 10:37:06.609825 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:37:06.609835 | orchestrator | 2025-10-09 10:37:06.609846 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-10-09 10:37:06.609857 | orchestrator | Thursday 09 October 2025 10:34:42 +0000 (0:00:01.065) 0:00:29.054 ****** 2025-10-09 10:37:06.609868 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:37:06.609878 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:37:06.609889 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:37:06.609900 | orchestrator | 2025-10-09 10:37:06.609910 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-10-09 10:37:06.609921 | orchestrator | Thursday 09 October 2025 10:34:43 +0000 (0:00:00.814) 0:00:29.868 ****** 2025-10-09 10:37:06.609932 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:37:06.609943 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-10-09 10:37:06.609953 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-10-09 10:37:06.609964 | orchestrator | 2025-10-09 10:37:06.609975 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-10-09 10:37:06.609986 | orchestrator | Thursday 09 October 2025 10:34:44 +0000 (0:00:01.086) 0:00:30.955 ****** 2025-10-09 10:37:06.609997 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:37:06.610007 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:37:06.610047 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:37:06.610061 | orchestrator | 2025-10-09 10:37:06.610082 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-10-09 
10:37:06.610093 | orchestrator | Thursday 09 October 2025 10:34:45 +0000 (0:00:00.346) 0:00:31.301 ****** 2025-10-09 10:37:06.610109 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-10-09 10:37:06.610120 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-10-09 10:37:06.610131 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-10-09 10:37:06.610142 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-10-09 10:37:06.610153 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-10-09 10:37:06.610170 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-10-09 10:37:06.610199 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-10-09 10:37:06.610211 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-10-09 10:37:06.610222 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-10-09 10:37:06.610232 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-10-09 10:37:06.610243 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-10-09 10:37:06.610254 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-10-09 10:37:06.610265 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-10-09 10:37:06.610276 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 
'fernet-healthcheck.sh'}) 2025-10-09 10:37:06.610287 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-10-09 10:37:06.610298 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:37:06.610309 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:37:06.610320 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:37:06.610331 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:37:06.610342 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:37:06.610353 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:37:06.610364 | orchestrator | 2025-10-09 10:37:06.610375 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-10-09 10:37:06.610385 | orchestrator | Thursday 09 October 2025 10:34:54 +0000 (0:00:09.246) 0:00:40.547 ****** 2025-10-09 10:37:06.610396 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:37:06.610407 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:37:06.610418 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:37:06.610429 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:37:06.610440 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:37:06.610451 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:37:06.610462 | orchestrator | 
2025-10-09 10:37:06.610473 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-10-09 10:37:06.610484 | orchestrator | Thursday 09 October 2025 10:34:57 +0000 (0:00:02.942) 0:00:43.490 ****** 2025-10-09 10:37:06.610502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:37:06.610528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:37:06.610541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-10-09 10:37:06.610553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:37:06.610565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:37:06.610584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-10-09 10:37:06.610601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-10-09 10:37:06.610619 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.610631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-10-09 10:37:06.610642 | orchestrator |
2025-10-09 10:37:06.610653 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-10-09 10:37:06.610664 | orchestrator | Thursday 09 October 2025 10:34:59 +0000 (0:00:02.396) 0:00:45.887 ******
2025-10-09 10:37:06.610675 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:37:06.610687 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:37:06.610698 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:37:06.610709 | orchestrator |
2025-10-09 10:37:06.610720 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-10-09 10:37:06.610731 | orchestrator | Thursday 09 October 2025 10:35:00 +0000 (0:00:00.328) 0:00:46.215 ******
2025-10-09 10:37:06.610742 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:37:06.610753 | orchestrator |
2025-10-09 10:37:06.610764 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-10-09 10:37:06.610775 | orchestrator | Thursday 09 October 2025 10:35:02 +0000 (0:00:02.476) 0:00:48.691 ******
2025-10-09 10:37:06.610786 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:37:06.610797 | orchestrator |
2025-10-09 10:37:06.610808 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-10-09 10:37:06.610826 | orchestrator | Thursday 09 October 2025 10:35:04 +0000 (0:00:02.334) 0:00:51.026 ******
2025-10-09 10:37:06.610837 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:37:06.610848 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:37:06.610859 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:37:06.610870 | orchestrator |
2025-10-09 10:37:06.610881 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-10-09 10:37:06.610892 | orchestrator | Thursday 09 October 2025 10:35:06 +0000 (0:00:01.636) 0:00:52.662 ******
2025-10-09 10:37:06.610903 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:37:06.610914 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:37:06.610925 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:37:06.610936 | orchestrator |
2025-10-09 10:37:06.610947 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-10-09 10:37:06.610958 | orchestrator | Thursday 09 October 2025 10:35:07 +0000 (0:00:00.521) 0:00:53.183 ******
2025-10-09 10:37:06.610969 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:37:06.610980 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:37:06.610991 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:37:06.611002 | orchestrator |
2025-10-09 10:37:06.611013 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-10-09 10:37:06.611024 | orchestrator | Thursday 09 October 2025 10:35:07 +0000 (0:00:00.379) 0:00:53.563 ******
2025-10-09 10:37:06.611035 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:37:06.611046 | orchestrator |
2025-10-09 10:37:06.611057 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-10-09 10:37:06.611067 | orchestrator | Thursday 09 October 2025 10:35:21 +0000 (0:00:14.240) 0:01:07.803 ******
2025-10-09 10:37:06.611078 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:37:06.611089 | orchestrator |
2025-10-09 10:37:06.611100 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-10-09 10:37:06.611111 | orchestrator | Thursday 09 October 2025 10:35:31 +0000 (0:00:00.075) 0:01:17.348 ******
2025-10-09 10:37:06.611122 | orchestrator |
2025-10-09 10:37:06.611133 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-10-09 10:37:06.611145 | orchestrator | Thursday 09 October 2025 10:35:31 +0000 (0:00:00.075) 0:01:17.424 ******
2025-10-09 10:37:06.611155 | orchestrator |
2025-10-09 10:37:06.611166 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-10-09 10:37:06.611177 | orchestrator | Thursday 09 October 2025 10:35:31 +0000 (0:00:00.063) 0:01:17.487 ******
2025-10-09 10:37:06.611204 | orchestrator |
2025-10-09 10:37:06.611215 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-10-09 10:37:06.611226 | orchestrator | Thursday 09 October 2025 10:35:31 +0000 (0:00:00.081) 0:01:17.569 ******
2025-10-09 10:37:06.611236 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:37:06.611247 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:37:06.611258 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:37:06.611269 | orchestrator |
2025-10-09 10:37:06.611284 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-10-09 10:37:06.611296 | orchestrator | Thursday 09 October 2025 10:35:57 +0000 (0:00:25.865) 0:01:43.435 ******
2025-10-09 10:37:06.611307 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:37:06.611317 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:37:06.611328 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:37:06.611339 | orchestrator |
2025-10-09 10:37:06.611350 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-10-09 10:37:06.611361 | orchestrator | Thursday 09 October 2025 10:36:07 +0000 (0:00:10.053) 0:01:53.488 ******
2025-10-09 10:37:06.611372 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:37:06.611383 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:37:06.611399 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:37:06.611410 | orchestrator |
2025-10-09 10:37:06.611421 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-10-09 10:37:06.611439 | orchestrator | Thursday 09 October 2025 10:36:15 +0000 (0:00:07.743) 0:02:01.232 ******
2025-10-09 10:37:06.611450 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:37:06.611461 | orchestrator |
2025-10-09 10:37:06.611472 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-10-09 10:37:06.611483 | orchestrator | Thursday 09 October 2025 10:36:15 +0000 (0:00:00.838) 0:02:02.070 ******
2025-10-09 10:37:06.611494 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:37:06.611505 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:37:06.611516 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:37:06.611527 | orchestrator |
2025-10-09 10:37:06.611538 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-10-09 10:37:06.611549 | orchestrator | Thursday 09 October 2025 10:36:16 +0000 (0:00:00.816) 0:02:02.887 ******
2025-10-09 10:37:06.611560 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:37:06.611571 | orchestrator |
2025-10-09 10:37:06.611582 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-10-09 10:37:06.611593 | orchestrator | Thursday 09 October 2025 10:36:18 +0000 (0:00:01.800) 0:02:04.688 ******
2025-10-09 10:37:06.611604 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-10-09 10:37:06.611615 | orchestrator |
2025-10-09 10:37:06.611626 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-10-09 10:37:06.611637 | orchestrator | Thursday 09 October 2025 10:36:30 +0000 (0:00:11.607) 0:02:16.296 ******
2025-10-09 10:37:06.611648 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-10-09 10:37:06.611659 | orchestrator |
2025-10-09 10:37:06.611670 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-10-09 10:37:06.611681 | orchestrator | Thursday 09 October 2025 10:36:53 +0000 (0:00:22.898) 0:02:39.194 ******
2025-10-09 10:37:06.611692 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-10-09 10:37:06.611702 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-10-09 10:37:06.611713 | orchestrator |
2025-10-09 10:37:06.611724 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-10-09 10:37:06.611736 | orchestrator | Thursday 09 October 2025 10:36:59 +0000 (0:00:06.818) 0:02:46.013 ******
2025-10-09 10:37:06.611747 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:37:06.611758 | orchestrator |
2025-10-09 10:37:06.611769 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-10-09 10:37:06.611780 | orchestrator | Thursday 09 October 2025 10:37:00 +0000 (0:00:00.210) 0:02:46.223 ******
2025-10-09 10:37:06.611791 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:37:06.611802 | orchestrator |
2025-10-09 10:37:06.611812 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-10-09 10:37:06.611823 | orchestrator | Thursday 09 October 2025 10:37:00 +0000 (0:00:00.205) 0:02:46.429 ******
2025-10-09 10:37:06.611834 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:37:06.611845 | orchestrator |
2025-10-09 10:37:06.611856 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-10-09 10:37:06.611867 | orchestrator | Thursday 09 October 2025 10:37:00 +0000 (0:00:00.247) 0:02:46.676 ******
2025-10-09 10:37:06.611878 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:37:06.611889 | orchestrator |
2025-10-09 10:37:06.611900 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-10-09 10:37:06.611911 | orchestrator | Thursday 09 October 2025 10:37:01 +0000 (0:00:01.197) 0:02:47.874 ******
2025-10-09 10:37:06.611922 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:37:06.611933 | orchestrator |
2025-10-09 10:37:06.611944 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-10-09 10:37:06.611954 | orchestrator | Thursday 09 October 2025 10:37:05 +0000 (0:00:03.268) 0:02:51.142 ******
2025-10-09 10:37:06.611965 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:37:06.611983 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:37:06.611994 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:37:06.612005 | orchestrator |
2025-10-09 10:37:06.612016 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:37:06.612027 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-10-09 10:37:06.612039 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-10-09 10:37:06.612050 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-10-09 10:37:06.612061 | orchestrator |
2025-10-09 10:37:06.612071 | orchestrator |
2025-10-09 10:37:06.612082 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:37:06.612098 | orchestrator | Thursday 09 October 2025 10:37:05 +0000 (0:00:00.589) 0:02:51.732 ******
2025-10-09 10:37:06.612109 | orchestrator | ===============================================================================
2025-10-09 10:37:06.612120 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.87s
2025-10-09 10:37:06.612131 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.90s
2025-10-09 10:37:06.612142 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.24s
2025-10-09 10:37:06.612153 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.61s
2025-10-09 10:37:06.612164 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.05s
2025-10-09 10:37:06.612232 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.54s
2025-10-09 10:37:06.612246 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.25s
2025-10-09 10:37:06.612257 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.74s
2025-10-09 10:37:06.612268 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.82s
2025-10-09 10:37:06.612279 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.40s
2025-10-09 10:37:06.612290 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.46s
2025-10-09 10:37:06.612301 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.38s
2025-10-09 10:37:06.612312 | orchestrator | keystone : Creating default user role ----------------------------------- 3.27s
2025-10-09 10:37:06.612323 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.94s
2025-10-09 10:37:06.612334 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.48s
2025-10-09 10:37:06.612345 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.44s
2025-10-09 10:37:06.612356 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.40s
2025-10-09 10:37:06.612367 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.33s
2025-10-09 10:37:06.612378 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.97s
2025-10-09 10:37:06.612389 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.92s
2025-10-09 10:37:06.612400 | orchestrator | 2025-10-09 10:37:06 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED
2025-10-09 10:37:06.612411 | orchestrator | 2025-10-09 10:37:06 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:37:09.646849 | orchestrator | 2025-10-09 10:37:09 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED
2025-10-09 10:37:09.647446 | orchestrator | 2025-10-09 10:37:09 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED
2025-10-09 10:37:09.649609 | orchestrator | 2025-10-09
10:37:09 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED
2025-10-09 10:37:09.651926 | orchestrator | 2025-10-09 10:37:09 | INFO  | Task 84702a7d-d492-43d5-9a44-9b57f808b53c is in state STARTED
2025-10-09 10:37:09.652809 | orchestrator | 2025-10-09 10:37:09 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED
2025-10-09 10:37:09.652876 | orchestrator | 2025-10-09 10:37:09 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:37:34.125647 | orchestrator | 2025-10-09 10:37:34 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED
2025-10-09 10:37:34.125745 | orchestrator | 2025-10-09 10:37:34 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED
2025-10-09 10:37:34.125758 | orchestrator | 2025-10-09 10:37:34 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED
2025-10-09 10:37:34.125771 | orchestrator | 2025-10-09 10:37:34 | INFO  | Task 84702a7d-d492-43d5-9a44-9b57f808b53c is in state SUCCESS
2025-10-09 10:37:34.126681 | orchestrator | 2025-10-09 10:37:34 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED
2025-10-09 10:37:34.126706 | orchestrator | 2025-10-09 10:37:34 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:37:37.184872 | orchestrator | 2025-10-09 10:37:37 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED
2025-10-09 10:37:37.184963 | orchestrator | 2025-10-09 10:37:37 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:37:37.184976 | orchestrator | 2025-10-09 10:37:37 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED
2025-10-09 10:37:37.184987 | orchestrator | 2025-10-09 10:37:37 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED
2025-10-09 10:37:37.184998 | orchestrator | 2025-10-09 10:37:37 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED
2025-10-09 10:37:37.185036 | orchestrator | 2025-10-09 10:37:37 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:38:10.575083 | orchestrator | 2025-10-09 10:38:10 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state STARTED
2025-10-09 10:38:10.575818 | orchestrator | 2025-10-09 10:38:10 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:38:10.577750 | orchestrator | 2025-10-09 10:38:10 | INFO  | Task
927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED
2025-10-09 10:38:10.580229 | orchestrator | 2025-10-09 10:38:10 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED
2025-10-09 10:38:10.580973 | orchestrator | 2025-10-09 10:38:10 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED
2025-10-09 10:38:10.580997 | orchestrator | 2025-10-09 10:38:10 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:38:13.689996 | orchestrator |
2025-10-09 10:38:13.690160 | orchestrator |
2025-10-09 10:38:13.690215 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:38:13.690230 | orchestrator |
2025-10-09 10:38:13.690241 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:38:13.690253 | orchestrator | Thursday 09 October 2025 10:36:59 +0000 (0:00:00.377) 0:00:00.377 ******
2025-10-09 10:38:13.690264 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:38:13.690277 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:38:13.690288 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:38:13.690299 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:38:13.690310 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:38:13.690337 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:38:13.690360 | orchestrator | ok: [testbed-manager]
2025-10-09 10:38:13.690371 | orchestrator |
2025-10-09 10:38:13.690383 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:38:13.690394 | orchestrator | Thursday 09 October 2025 10:37:00 +0000 (0:00:01.095) 0:00:01.473 ******
2025-10-09 10:38:13.690405 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-10-09 10:38:13.690417 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-10-09 10:38:13.690428 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-10-09 10:38:13.690439 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-10-09 10:38:13.690450 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-10-09 10:38:13.690462 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-10-09 10:38:13.690473 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-10-09 10:38:13.690484 | orchestrator |
2025-10-09 10:38:13.690496 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-10-09 10:38:13.690507 | orchestrator |
2025-10-09 10:38:13.690518 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-10-09 10:38:13.690529 | orchestrator | Thursday 09 October 2025 10:37:01 +0000 (0:00:01.153) 0:00:02.626 ******
2025-10-09 10:38:13.690541 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-10-09 10:38:13.690554 | orchestrator |
2025-10-09 10:38:13.690566 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-10-09 10:38:13.690579 | orchestrator | Thursday 09 October 2025 10:37:04 +0000 (0:00:03.138) 0:00:05.765 ******
2025-10-09 10:38:13.690623 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-10-09 10:38:13.690635 | orchestrator |
2025-10-09 10:38:13.690647 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-10-09 10:38:13.690659 | orchestrator | Thursday 09 October 2025 10:37:08 +0000 (0:00:03.786) 0:00:09.551 ******
2025-10-09 10:38:13.690673 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-10-09 10:38:13.690687 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-10-09 10:38:13.690700 | orchestrator |
2025-10-09 10:38:13.690713 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-10-09 10:38:13.690725 | orchestrator | Thursday 09 October 2025 10:37:13 +0000 (0:00:05.644) 0:00:15.196 ******
2025-10-09 10:38:13.690737 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-10-09 10:38:13.690750 | orchestrator |
2025-10-09 10:38:13.690762 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-10-09 10:38:13.690775 | orchestrator | Thursday 09 October 2025 10:37:17 +0000 (0:00:03.509) 0:00:18.705 ******
2025-10-09 10:38:13.690787 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-09 10:38:13.690800 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-10-09 10:38:13.690812 | orchestrator |
2025-10-09 10:38:13.690824 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-10-09 10:38:13.690837 | orchestrator | Thursday 09 October 2025 10:37:21 +0000 (0:00:04.014) 0:00:22.719 ******
2025-10-09 10:38:13.690849 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-10-09 10:38:13.690862 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-10-09 10:38:13.690875 | orchestrator |
2025-10-09 10:38:13.690888 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-10-09 10:38:13.690901 | orchestrator | Thursday 09 October 2025 10:37:27 +0000 (0:00:06.347) 0:00:29.066 ******
2025-10-09 10:38:13.690913 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-10-09 10:38:13.690925 | orchestrator |
2025-10-09 10:38:13.690936 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:38:13.690947 | orchestrator |
testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.691092 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.691108 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.691119 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.691145 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.691201 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.691214 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.691225 | orchestrator | 2025-10-09 10:38:13.691236 | orchestrator | 2025-10-09 10:38:13.691247 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:38:13.691258 | orchestrator | Thursday 09 October 2025 10:37:33 +0000 (0:00:05.770) 0:00:34.836 ****** 2025-10-09 10:38:13.691269 | orchestrator | =============================================================================== 2025-10-09 10:38:13.691280 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.35s 2025-10-09 10:38:13.691291 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.77s 2025-10-09 10:38:13.691312 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.64s 2025-10-09 10:38:13.691323 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.01s 2025-10-09 10:38:13.691334 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.79s 2025-10-09 10:38:13.691345 | orchestrator | 
service-ks-register : ceph-rgw | Creating projects ---------------------- 3.51s 2025-10-09 10:38:13.691356 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.14s 2025-10-09 10:38:13.691367 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.15s 2025-10-09 10:38:13.691378 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s 2025-10-09 10:38:13.691388 | orchestrator | 2025-10-09 10:38:13.691399 | orchestrator | 2025-10-09 10:38:13.691410 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-10-09 10:38:13.691421 | orchestrator | 2025-10-09 10:38:13.691432 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-10-09 10:38:13.691443 | orchestrator | Thursday 09 October 2025 10:36:51 +0000 (0:00:00.300) 0:00:00.300 ****** 2025-10-09 10:38:13.691454 | orchestrator | changed: [testbed-manager] 2025-10-09 10:38:13.691465 | orchestrator | 2025-10-09 10:38:13.691476 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-10-09 10:38:13.691487 | orchestrator | Thursday 09 October 2025 10:36:52 +0000 (0:00:01.830) 0:00:02.130 ****** 2025-10-09 10:38:13.691498 | orchestrator | changed: [testbed-manager] 2025-10-09 10:38:13.691509 | orchestrator | 2025-10-09 10:38:13.691520 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-10-09 10:38:13.691531 | orchestrator | Thursday 09 October 2025 10:36:54 +0000 (0:00:01.205) 0:00:03.336 ****** 2025-10-09 10:38:13.691541 | orchestrator | changed: [testbed-manager] 2025-10-09 10:38:13.691552 | orchestrator | 2025-10-09 10:38:13.691563 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-10-09 10:38:13.691574 | orchestrator | Thursday 09 October 2025 10:36:55 +0000 
(0:00:01.132) 0:00:04.468 ****** 2025-10-09 10:38:13.691585 | orchestrator | changed: [testbed-manager] 2025-10-09 10:38:13.691596 | orchestrator | 2025-10-09 10:38:13.691607 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-10-09 10:38:13.691618 | orchestrator | Thursday 09 October 2025 10:36:56 +0000 (0:00:01.539) 0:00:06.008 ****** 2025-10-09 10:38:13.691629 | orchestrator | changed: [testbed-manager] 2025-10-09 10:38:13.691639 | orchestrator | 2025-10-09 10:38:13.691650 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-10-09 10:38:13.691661 | orchestrator | Thursday 09 October 2025 10:36:58 +0000 (0:00:01.552) 0:00:07.561 ****** 2025-10-09 10:38:13.691672 | orchestrator | changed: [testbed-manager] 2025-10-09 10:38:13.691683 | orchestrator | 2025-10-09 10:38:13.691694 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-10-09 10:38:13.691705 | orchestrator | Thursday 09 October 2025 10:36:59 +0000 (0:00:01.171) 0:00:08.732 ****** 2025-10-09 10:38:13.691716 | orchestrator | changed: [testbed-manager] 2025-10-09 10:38:13.691729 | orchestrator | 2025-10-09 10:38:13.691741 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-10-09 10:38:13.691753 | orchestrator | Thursday 09 October 2025 10:37:01 +0000 (0:00:02.066) 0:00:10.799 ****** 2025-10-09 10:38:13.691852 | orchestrator | changed: [testbed-manager] 2025-10-09 10:38:13.691869 | orchestrator | 2025-10-09 10:38:13.691882 | orchestrator | TASK [Create admin user] ******************************************************* 2025-10-09 10:38:13.691894 | orchestrator | Thursday 09 October 2025 10:37:03 +0000 (0:00:01.767) 0:00:12.567 ****** 2025-10-09 10:38:13.691906 | orchestrator | changed: [testbed-manager] 2025-10-09 10:38:13.691918 | orchestrator | 2025-10-09 10:38:13.691930 | orchestrator | TASK [Remove temporary 
file for ceph_dashboard_password] *********************** 2025-10-09 10:38:13.691942 | orchestrator | Thursday 09 October 2025 10:37:47 +0000 (0:00:44.194) 0:00:56.761 ****** 2025-10-09 10:38:13.691963 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:38:13.691976 | orchestrator | 2025-10-09 10:38:13.691988 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-10-09 10:38:13.692000 | orchestrator | 2025-10-09 10:38:13.692012 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-10-09 10:38:13.692024 | orchestrator | Thursday 09 October 2025 10:37:47 +0000 (0:00:00.169) 0:00:56.931 ****** 2025-10-09 10:38:13.692036 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:38:13.692048 | orchestrator | 2025-10-09 10:38:13.692060 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-10-09 10:38:13.692072 | orchestrator | 2025-10-09 10:38:13.692083 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-10-09 10:38:13.692094 | orchestrator | Thursday 09 October 2025 10:37:49 +0000 (0:00:01.591) 0:00:58.522 ****** 2025-10-09 10:38:13.692104 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:38:13.692115 | orchestrator | 2025-10-09 10:38:13.692126 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-10-09 10:38:13.692136 | orchestrator | 2025-10-09 10:38:13.692147 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-10-09 10:38:13.692164 | orchestrator | Thursday 09 October 2025 10:38:00 +0000 (0:00:11.345) 0:01:09.868 ****** 2025-10-09 10:38:13.692208 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:38:13.692220 | orchestrator | 2025-10-09 10:38:13.692240 | orchestrator | PLAY RECAP ********************************************************************* 
2025-10-09 10:38:13.692252 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-10-09 10:38:13.692263 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.692274 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.692285 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:38:13.692296 | orchestrator | 2025-10-09 10:38:13.692307 | orchestrator | 2025-10-09 10:38:13.692318 | orchestrator | 2025-10-09 10:38:13.692329 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:38:13.692340 | orchestrator | Thursday 09 October 2025 10:38:11 +0000 (0:00:11.125) 0:01:20.993 ****** 2025-10-09 10:38:13.692351 | orchestrator | =============================================================================== 2025-10-09 10:38:13.692362 | orchestrator | Create admin user ------------------------------------------------------ 44.19s 2025-10-09 10:38:13.692373 | orchestrator | Restart ceph manager service ------------------------------------------- 24.06s 2025-10-09 10:38:13.692384 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.07s 2025-10-09 10:38:13.692395 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.83s 2025-10-09 10:38:13.692405 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.77s 2025-10-09 10:38:13.692416 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.55s 2025-10-09 10:38:13.692427 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.54s 2025-10-09 10:38:13.692438 | orchestrator | Set mgr/dashboard/ssl to false 
------------------------------------------ 1.21s 2025-10-09 10:38:13.692449 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.17s 2025-10-09 10:38:13.692460 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.13s 2025-10-09 10:38:13.692471 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s 2025-10-09 10:38:13.692482 | orchestrator | 2025-10-09 10:38:13 | INFO  | Task c4f20318-57a1-487a-9b5f-49c6133b007f is in state SUCCESS 2025-10-09 10:38:13.692493 | orchestrator | 2025-10-09 10:38:13 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:38:13.692511 | orchestrator | 2025-10-09 10:38:13 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED 2025-10-09 10:38:13.692523 | orchestrator | 2025-10-09 10:38:13 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:38:13.692801 | orchestrator | 2025-10-09 10:38:13 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED 2025-10-09 10:38:13.692824 | orchestrator | 2025-10-09 10:38:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:16.721733 | orchestrator | 2025-10-09 10:38:16 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:38:16.722889 | orchestrator | 2025-10-09 10:38:16 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED 2025-10-09 10:38:16.723613 | orchestrator | 2025-10-09 10:38:16 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:38:16.724529 | orchestrator | 2025-10-09 10:38:16 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED 2025-10-09 10:38:16.724552 | orchestrator | 2025-10-09 10:38:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:38:19.769148 | orchestrator | 2025-10-09 10:38:19 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state 
STARTED
927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED
2025-10-09 10:40:09.651009 | orchestrator | 2025-10-09 10:40:09 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED
2025-10-09 10:40:09.652421 | orchestrator | 2025-10-09 10:40:09 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state STARTED
2025-10-09 10:40:09.652941 | orchestrator | 2025-10-09 10:40:09 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:40:12.692078 | orchestrator | 2025-10-09 10:40:12 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED
2025-10-09 10:40:12.692534 | orchestrator | 2025-10-09 10:40:12 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:40:12.693683 | orchestrator | 2025-10-09 10:40:12 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED
2025-10-09 10:40:12.696916 | orchestrator | 2025-10-09 10:40:12 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED
2025-10-09 10:40:12.698593 | orchestrator | 2025-10-09 10:40:12 | INFO  | Task 501e57d0-da60-4ece-b9b8-0b82a50bb036 is in state SUCCESS
2025-10-09 10:40:12.700306 | orchestrator |
2025-10-09 10:40:12.700336 | orchestrator |
2025-10-09 10:40:12.700349 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:40:12.700360 | orchestrator |
2025-10-09 10:40:12.700371 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:40:12.700382 | orchestrator | Thursday 09 October 2025 10:36:59 +0000 (0:00:00.309) 0:00:00.309 ******
2025-10-09 10:40:12.700394 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:40:12.700406 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:40:12.700416 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:40:12.700427 | orchestrator |
2025-10-09 10:40:12.700438 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:40:12.700449 | orchestrator | Thursday 09 October 2025 10:36:59 +0000 (0:00:00.341) 0:00:00.650 ******
2025-10-09 10:40:12.700460 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-10-09 10:40:12.700471 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-10-09 10:40:12.700482 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-10-09 10:40:12.700492 | orchestrator |
2025-10-09 10:40:12.700503 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-10-09 10:40:12.700514 | orchestrator |
2025-10-09 10:40:12.700524 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-10-09 10:40:12.700535 | orchestrator | Thursday 09 October 2025 10:37:00 +0000 (0:00:00.634) 0:00:01.285 ******
2025-10-09 10:40:12.700546 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:40:12.700558 | orchestrator |
2025-10-09 10:40:12.700568 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-10-09 10:40:12.700579 | orchestrator | Thursday 09 October 2025 10:37:01 +0000 (0:00:00.931) 0:00:02.217 ******
2025-10-09 10:40:12.700590 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-10-09 10:40:12.700602 | orchestrator |
2025-10-09 10:40:12.700613 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-10-09 10:40:12.700623 | orchestrator | Thursday 09 October 2025 10:37:05 +0000 (0:00:04.326) 0:00:06.543 ******
2025-10-09 10:40:12.700655 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-10-09 10:40:12.700667 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-10-09 10:40:12.700678 | orchestrator |
2025-10-09 10:40:12.700689 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-10-09 10:40:12.700699 | orchestrator | Thursday 09 October 2025 10:37:11 +0000 (0:00:05.997) 0:00:12.540 ******
2025-10-09 10:40:12.700710 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-10-09 10:40:12.700721 | orchestrator |
2025-10-09 10:40:12.700732 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-10-09 10:40:12.700742 | orchestrator | Thursday 09 October 2025 10:37:15 +0000 (0:00:03.795) 0:00:16.336 ******
2025-10-09 10:40:12.700753 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-09 10:40:12.700764 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-10-09 10:40:12.700775 | orchestrator |
2025-10-09 10:40:12.700785 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-10-09 10:40:12.700796 | orchestrator | Thursday 09 October 2025 10:37:19 +0000 (0:00:04.218) 0:00:20.554 ******
2025-10-09 10:40:12.700807 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-10-09 10:40:12.700818 | orchestrator |
2025-10-09 10:40:12.700828 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-10-09 10:40:12.700839 | orchestrator | Thursday 09 October 2025 10:37:23 +0000 (0:00:04.141) 0:00:24.696 ******
2025-10-09 10:40:12.700850 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-10-09 10:40:12.700873 | orchestrator |
2025-10-09 10:40:12.700884 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-10-09 10:40:12.700895 | orchestrator | Thursday 09 October 2025 10:37:28 +0000 (0:00:04.681) 0:00:29.378 ******
2025-10-09 10:40:12.700933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.700955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.700974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.700995 | orchestrator | 2025-10-09 10:40:12.701008 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-10-09 10:40:12.701020 | orchestrator | Thursday 09 October 2025 10:37:33 +0000 (0:00:05.278) 0:00:34.656 ****** 2025-10-09 10:40:12.701033 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:40:12.701045 | orchestrator | 2025-10-09 10:40:12.701064 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-10-09 10:40:12.701077 | orchestrator | Thursday 09 October 2025 10:37:34 +0000 (0:00:00.682) 0:00:35.339 ****** 2025-10-09 10:40:12.701089 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:40:12.701101 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:40:12.701113 | orchestrator | changed: 
[testbed-node-2]
2025-10-09 10:40:12.701125 | orchestrator |
2025-10-09 10:40:12.701138 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-10-09 10:40:12.701150 | orchestrator | Thursday 09 October 2025 10:37:39 +0000 (0:00:04.514) 0:00:39.853 ******
2025-10-09 10:40:12.701181 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-09 10:40:12.701194 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-09 10:40:12.701207 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-09 10:40:12.701219 | orchestrator |
2025-10-09 10:40:12.701232 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-10-09 10:40:12.701244 | orchestrator | Thursday 09 October 2025 10:37:40 +0000 (0:00:01.683) 0:00:41.537 ******
2025-10-09 10:40:12.701257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-09 10:40:12.701268 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-09 10:40:12.701279 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-10-09 10:40:12.701290 | orchestrator |
2025-10-09 10:40:12.701301 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-10-09 10:40:12.701312 | orchestrator | Thursday 09 October 2025 10:37:41 +0000 (0:00:01.111) 0:00:42.648 ******
2025-10-09 10:40:12.701323 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:40:12.701333 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:40:12.701344 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:40:12.701355 | orchestrator |
2025-10-09 10:40:12.701366 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-10-09 10:40:12.701383 | orchestrator | Thursday 09 October 2025 10:37:42 +0000 (0:00:00.630) 0:00:43.279 ******
2025-10-09 10:40:12.701394 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:40:12.701405 | orchestrator |
2025-10-09 10:40:12.701415 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-10-09 10:40:12.701426 | orchestrator | Thursday 09 October 2025 10:37:42 +0000 (0:00:00.336) 0:00:43.615 ******
2025-10-09 10:40:12.701437 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:40:12.701448 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:40:12.701458 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:40:12.701469 | orchestrator |
2025-10-09 10:40:12.701480 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-10-09 10:40:12.701491 | orchestrator | Thursday 09 October 2025 10:37:43 +0000 (0:00:00.301) 0:00:43.917 ******
2025-10-09 10:40:12.701501 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:40:12.701512 | orchestrator |
2025-10-09 10:40:12.701523 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-10-09 10:40:12.701534 | orchestrator | Thursday 09 October 2025 10:37:43 +0000 (0:00:00.660) 0:00:44.577 ******
2025-10-09 10:40:12.701557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.701571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.701590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.701602 | orchestrator | 2025-10-09 10:40:12.701613 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-10-09 10:40:12.701628 | orchestrator | Thursday 09 October 2025 10:37:50 +0000 (0:00:06.551) 0:00:51.129 ****** 2025-10-09 10:40:12.701649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:40:12.701667 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.701680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:40:12.701692 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.701715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:40:12.701728 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.701739 | orchestrator | 2025-10-09 10:40:12.701751 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-10-09 10:40:12.701770 | orchestrator | Thursday 09 October 2025 10:37:56 +0000 (0:00:06.012) 0:00:57.141 ****** 2025-10-09 10:40:12.701782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:40:12.701794 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.701824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:40:12.701837 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.701848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-10-09 10:40:12.701866 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.701878 | orchestrator | 2025-10-09 10:40:12.701889 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-10-09 10:40:12.701900 | orchestrator | Thursday 09 October 2025 10:38:00 +0000 (0:00:03.740) 0:01:00.882 ****** 2025-10-09 10:40:12.701911 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.701922 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.701932 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.701943 | orchestrator | 2025-10-09 10:40:12.701955 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-10-09 10:40:12.701966 | orchestrator | Thursday 09 October 2025 10:38:04 +0000 (0:00:04.055) 0:01:04.937 ****** 2025-10-09 10:40:12.701987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.702001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.702065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.702080 | orchestrator | 2025-10-09 10:40:12.702092 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-10-09 10:40:12.702108 | orchestrator | Thursday 09 October 2025 10:38:09 +0000 (0:00:05.706) 0:01:10.644 ****** 2025-10-09 10:40:12.702119 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:40:12.702130 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:40:12.702141 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:40:12.702152 | orchestrator | 2025-10-09 10:40:12.702192 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-10-09 10:40:12.702204 | orchestrator | Thursday 09 October 2025 10:38:19 +0000 (0:00:09.274) 0:01:19.918 ****** 2025-10-09 10:40:12.702215 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.702233 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.702245 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.702256 | orchestrator | 2025-10-09 10:40:12.702267 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-10-09 10:40:12.702285 | orchestrator | Thursday 09 October 2025 10:38:26 +0000 (0:00:07.013) 0:01:26.932 ****** 2025-10-09 10:40:12.702297 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.702308 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.702319 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.702330 | orchestrator | 2025-10-09 10:40:12.702341 | orchestrator | TASK [glance : Copying over glance-image-import.conf] 
************************** 2025-10-09 10:40:12.702351 | orchestrator | Thursday 09 October 2025 10:38:33 +0000 (0:00:07.474) 0:01:34.406 ****** 2025-10-09 10:40:12.702363 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.702373 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.702384 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.702395 | orchestrator | 2025-10-09 10:40:12.702406 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-10-09 10:40:12.702417 | orchestrator | Thursday 09 October 2025 10:38:38 +0000 (0:00:04.445) 0:01:38.851 ****** 2025-10-09 10:40:12.702428 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.702439 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.702450 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.702461 | orchestrator | 2025-10-09 10:40:12.702472 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-10-09 10:40:12.702483 | orchestrator | Thursday 09 October 2025 10:38:43 +0000 (0:00:04.906) 0:01:43.757 ****** 2025-10-09 10:40:12.702494 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.702505 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.702516 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.702527 | orchestrator | 2025-10-09 10:40:12.702537 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-10-09 10:40:12.702548 | orchestrator | Thursday 09 October 2025 10:38:43 +0000 (0:00:00.283) 0:01:44.041 ****** 2025-10-09 10:40:12.702559 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-10-09 10:40:12.702570 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.702581 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-10-09 
10:40:12.702592 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.702603 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-10-09 10:40:12.702614 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.702626 | orchestrator | 2025-10-09 10:40:12.702637 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-10-09 10:40:12.702647 | orchestrator | Thursday 09 October 2025 10:38:46 +0000 (0:00:03.177) 0:01:47.218 ****** 2025-10-09 10:40:12.702659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.702691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.702705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-10-09 10:40:12.702724 | orchestrator | 2025-10-09 10:40:12.702735 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-10-09 10:40:12.702746 | orchestrator | 
Thursday 09 October 2025 10:38:50 +0000 (0:00:03.927) 0:01:51.146 ****** 2025-10-09 10:40:12.702757 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:12.702768 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:12.702779 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:12.702790 | orchestrator | 2025-10-09 10:40:12.702800 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-10-09 10:40:12.702811 | orchestrator | Thursday 09 October 2025 10:38:50 +0000 (0:00:00.397) 0:01:51.543 ****** 2025-10-09 10:40:12.702822 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:40:12.702833 | orchestrator | 2025-10-09 10:40:12.702844 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-10-09 10:40:12.702855 | orchestrator | Thursday 09 October 2025 10:38:52 +0000 (0:00:02.133) 0:01:53.676 ****** 2025-10-09 10:40:12.702866 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:40:12.702877 | orchestrator | 2025-10-09 10:40:12.702892 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-10-09 10:40:12.702903 | orchestrator | Thursday 09 October 2025 10:38:55 +0000 (0:00:02.391) 0:01:56.068 ****** 2025-10-09 10:40:12.702914 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:40:12.702925 | orchestrator | 2025-10-09 10:40:12.702935 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-10-09 10:40:12.702946 | orchestrator | Thursday 09 October 2025 10:38:57 +0000 (0:00:02.106) 0:01:58.174 ****** 2025-10-09 10:40:12.702957 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:40:12.702968 | orchestrator | 2025-10-09 10:40:12.702979 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-10-09 10:40:12.702990 | orchestrator | Thursday 09 October 2025 10:39:26 +0000 (0:00:28.672) 0:02:26.847 ****** 
2025-10-09 10:40:12.703001 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:40:12.703012 | orchestrator | 2025-10-09 10:40:12.703028 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-10-09 10:40:12.703040 | orchestrator | Thursday 09 October 2025 10:39:28 +0000 (0:00:02.301) 0:02:29.149 ****** 2025-10-09 10:40:12.703051 | orchestrator | 2025-10-09 10:40:12.703062 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-10-09 10:40:12.703073 | orchestrator | Thursday 09 October 2025 10:39:28 +0000 (0:00:00.068) 0:02:29.217 ****** 2025-10-09 10:40:12.703083 | orchestrator | 2025-10-09 10:40:12.703094 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-10-09 10:40:12.703105 | orchestrator | Thursday 09 October 2025 10:39:28 +0000 (0:00:00.065) 0:02:29.282 ****** 2025-10-09 10:40:12.703116 | orchestrator | 2025-10-09 10:40:12.703127 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-10-09 10:40:12.703138 | orchestrator | Thursday 09 October 2025 10:39:28 +0000 (0:00:00.071) 0:02:29.354 ****** 2025-10-09 10:40:12.703148 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:40:12.703159 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:40:12.703217 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:40:12.703228 | orchestrator | 2025-10-09 10:40:12.703239 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:40:12.703251 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-10-09 10:40:12.703264 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-10-09 10:40:12.703275 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 
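The PLAY RECAP lines above follow Ansible's fixed `host : key=value …` layout, which makes them easy to post-process when scraping job logs like this one. A minimal sketch (the `parse_recap_line` helper is illustrative, not part of Zuul or Ansible):

```python
import re

# Matches an Ansible PLAY RECAP host line, e.g.
# "testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> dict:
    """Parse one PLAY RECAP line into {'host': ..., 'ok': int, 'failed': int, ...}."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    result = {"host": m.group("host")}
    for pair in m.group("stats").split():
        key, value = pair.split("=")
        result[key] = int(value)
    return result

if __name__ == "__main__":
    line = ("testbed-node-0 : ok=26  changed=19  unreachable=0 "
            "failed=0 skipped=12  rescued=0 ignored=0")
    stats = parse_recap_line(line)
    print(stats["host"], stats["ok"], stats["failed"])
```

A run is healthy when every host reports `failed=0` and `unreachable=0`, as all three testbed nodes do above.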
2025-10-09 10:40:12.703286 | orchestrator | 2025-10-09 10:40:12.703297 | orchestrator | 2025-10-09 10:40:12.703307 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:40:12.703326 | orchestrator | Thursday 09 October 2025 10:40:09 +0000 (0:00:41.155) 0:03:10.509 ****** 2025-10-09 10:40:12.703337 | orchestrator | =============================================================================== 2025-10-09 10:40:12.703347 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.16s 2025-10-09 10:40:12.703358 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.67s 2025-10-09 10:40:12.703367 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.27s 2025-10-09 10:40:12.703377 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 7.47s 2025-10-09 10:40:12.703387 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 7.01s 2025-10-09 10:40:12.703396 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.55s 2025-10-09 10:40:12.703406 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.01s 2025-10-09 10:40:12.703416 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.00s 2025-10-09 10:40:12.703425 | orchestrator | glance : Copying over config.json files for services -------------------- 5.71s 2025-10-09 10:40:12.703435 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.28s 2025-10-09 10:40:12.703445 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.91s 2025-10-09 10:40:12.703454 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.68s 2025-10-09 10:40:12.703464 | orchestrator | glance : 
Ensuring glance service ceph config subdir exists -------------- 4.51s 2025-10-09 10:40:12.703474 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.45s 2025-10-09 10:40:12.703483 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.33s 2025-10-09 10:40:12.703493 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.22s 2025-10-09 10:40:12.703502 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 4.14s 2025-10-09 10:40:12.703512 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.06s 2025-10-09 10:40:12.703522 | orchestrator | glance : Check glance containers ---------------------------------------- 3.93s 2025-10-09 10:40:12.703531 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.80s 2025-10-09 10:40:12.703541 | orchestrator | 2025-10-09 10:40:12 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:40:15.748659 | orchestrator | 2025-10-09 10:40:15 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:40:15.752717 | orchestrator | 2025-10-09 10:40:15 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:40:15.754932 | orchestrator | 2025-10-09 10:40:15 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED 2025-10-09 10:40:15.756857 | orchestrator | 2025-10-09 10:40:15 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:40:15.756955 | orchestrator | 2025-10-09 10:40:15 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:40:18.798156 | orchestrator | 2025-10-09 10:40:18 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:40:18.798718 | orchestrator | 2025-10-09 10:40:18 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 
10:40:18.801526 | orchestrator | 2025-10-09 10:40:18 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED 2025-10-09 10:40:18.803738 | orchestrator | 2025-10-09 10:40:18 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state STARTED 2025-10-09 10:40:18.803760 | orchestrator | 2025-10-09 10:40:18 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:40:21.841370 | orchestrator | 2025-10-09 10:40:21 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED 2025-10-09 10:40:21.841937 | orchestrator | 2025-10-09 10:40:21 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:40:21.843519 | orchestrator | 2025-10-09 10:40:21 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:40:21.844382 | orchestrator | 2025-10-09 10:40:21 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED 2025-10-09 10:40:21.847477 | orchestrator | 2025-10-09 10:40:21 | INFO  | Task 87e630f0-8497-45aa-9e5a-a9ac01f93664 is in state SUCCESS 2025-10-09 10:40:21.849440 | orchestrator | 2025-10-09 10:40:21.849471 | orchestrator | 2025-10-09 10:40:21.849483 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:40:21.849496 | orchestrator | 2025-10-09 10:40:21.849507 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:40:21.849519 | orchestrator | Thursday 09 October 2025 10:36:51 +0000 (0:00:00.297) 0:00:00.297 ****** 2025-10-09 10:40:21.849567 | orchestrator | ok: [testbed-manager] 2025-10-09 10:40:21.849580 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:40:21.849593 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:40:21.849604 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:40:21.849615 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:40:21.849626 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:40:21.849636 | orchestrator | ok: 
[testbed-node-5] 2025-10-09 10:40:21.849647 | orchestrator | 2025-10-09 10:40:21.849658 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:40:21.849670 | orchestrator | Thursday 09 October 2025 10:36:52 +0000 (0:00:00.884) 0:00:01.181 ****** 2025-10-09 10:40:21.849681 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-10-09 10:40:21.849693 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-10-09 10:40:21.849703 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-10-09 10:40:21.849714 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-10-09 10:40:21.849725 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-10-09 10:40:21.849736 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-10-09 10:40:21.849747 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-10-09 10:40:21.849757 | orchestrator | 2025-10-09 10:40:21.849768 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-10-09 10:40:21.849779 | orchestrator | 2025-10-09 10:40:21.849790 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-10-09 10:40:21.849801 | orchestrator | Thursday 09 October 2025 10:36:52 +0000 (0:00:00.798) 0:00:01.980 ****** 2025-10-09 10:40:21.849814 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:40:21.849999 | orchestrator | 2025-10-09 10:40:21.850055 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-10-09 10:40:21.850069 | orchestrator | Thursday 09 October 2025 10:36:54 +0000 (0:00:01.736) 0:00:03.716 ****** 2025-10-09 10:40:21.850085 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850120 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:40:21.850157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850222 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850263 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850318 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850443 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850522 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:40:21.850538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850562 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850597 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850609 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.850651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.850662 | orchestrator | 2025-10-09 10:40:21.850673 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-10-09 10:40:21.850685 | orchestrator | Thursday 09 October 2025 10:36:58 +0000 (0:00:04.068) 0:00:07.784 ****** 2025-10-09 10:40:21.850696 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:40:21.850708 | orchestrator | 2025-10-09 10:40:21.850878 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-10-09 10:40:21.850893 | orchestrator | Thursday 09 October 2025 10:37:00 +0000 (0:00:01.804) 0:00:09.588 ****** 2025-10-09 10:40:21.850913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-10-09 10:40:21.850926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850955 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-10-09 10:40:21.850974 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.850998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851040 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.851056 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-10-09 10:40:21.851068 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851130 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851142 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851199 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851213 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851301 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-10-09 10:40:21.851339 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-10-09 10:40:21.851352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851459 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-10-09 10:40:21.851482 | orchestrator | 2025-10-09 10:40:21.851493 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-10-09 10:40:21.851504 | orchestrator | Thursday 09 October 2025 10:37:06 +0000 (0:00:06.228) 0:00:15.817 ****** 2025-10-09 10:40:21.851547 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:40:21.851560 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.851577 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.851589 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-09 10:40:21.851610 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851622 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:40:21.851634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.851654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.851807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851822 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:21.851834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.851846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.851897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.851920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.851961 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.851972 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:21.851984 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:21.852003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.852021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852045 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:40:21.852056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.852068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852154 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:40:21.852219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.852232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852301 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:40:21.852313 | orchestrator | 2025-10-09 10:40:21.852324 | orchestrator | TASK [service-cert-copy : prometheus | Copying over 
backend internal TLS key] *** 2025-10-09 10:40:21.852335 | orchestrator | Thursday 09 October 2025 10:37:08 +0000 (0:00:01.808) 0:00:17.625 ****** 2025-10-09 10:40:21.852347 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-10-09 10:40:21.852359 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.852371 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852388 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-10-09 10:40:21.852409 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.852441 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:40:21.852453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852488 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852499 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:21.852516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.852528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.852604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-10-09 10:40:21.852663 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:21.852674 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:21.852691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.852703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852726 | orchestrator | skipping: [testbed-node-3] 
2025-10-09 10:40:21.852737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-10-09 10:40:21.852749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-10-09 10:40:21.852783 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:40:21.852795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-09 10:40:21.852806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.853610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.853636 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:40:21.853650 | orchestrator |
2025-10-09 10:40:21.853662 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-10-09 10:40:21.853673 | orchestrator | Thursday 09 October 2025 10:37:10 +0000 (0:00:02.341) 0:00:19.967 ******
2025-10-09 10:40:21.853685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-09 10:40:21.853697 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-10-09 10:40:21.853709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-09 10:40:21.853729 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-09 10:40:21.853752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-09 10:40:21.853764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-09 10:40:21.853807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.853822 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-09 10:40:21.853834 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-10-09 10:40:21.853847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.853859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.853884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.853897 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.853910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.853951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.853965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.854002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.854062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.854077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.854103 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-10-09 10:40:21.854117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.854185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.854202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.854215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.854228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-10-09 10:40:21.854248 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.854267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.854279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.854293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-10-09 10:40:21.854306 | orchestrator |
2025-10-09 10:40:21.854319 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-10-09 10:40:21.854331 | orchestrator | Thursday 09 October 2025 10:37:17 +0000 (0:00:06.276) 0:00:26.244 ******
2025-10-09 10:40:21.854343 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:40:21.854356 | orchestrator |
2025-10-09 10:40:21.854368 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-10-09 10:40:21.854410 | orchestrator | Thursday 09 October 2025 10:37:18 +0000 (0:00:01.434) 0:00:27.678 ******
2025-10-09 10:40:21.854426 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089781, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854439 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089781, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854459 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089796, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1618035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854472 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089796, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1618035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854490 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089781, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854504 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089781, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854544 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089781, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854558 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089774, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1556566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854569 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089774, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1556566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854592 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089796, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1618035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854604 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089781, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854621 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089796, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1618035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854633 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089781, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854673 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089774, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1556566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854765 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089774, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1556566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854778 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089796, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1618035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854799 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089792, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1598823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854811 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089796, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1618035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854828 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089792, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1598823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854840 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089792, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1598823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854887 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089792, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1598823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854900 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089774, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1556566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854912 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089772, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1535563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854930 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089772, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1535563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854942 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089772, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1535563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854958 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089774, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1556566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854970 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089772, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1535563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.854981 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089782, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855022 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089796, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1618035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855042 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089792, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1598823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855054 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089782, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855065 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089782, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855119 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089792, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1598823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855132 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089782, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855245 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089789, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1594048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855298 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089772, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1535563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855322 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089789, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1594048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.855333 | orchestrator | skipping: [testbed-node-1] =>
(item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089789, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1594048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855345 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089772, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1535563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855362 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089782, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855372 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089785, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1578338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855383 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089789, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1594048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855435 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089774, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1556566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:40:21.855456 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089782, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 
1760003541.156834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855466 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089789, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1594048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855477 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089785, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1578338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855492 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089780, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1558337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855502 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089785, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1578338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855513 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089785, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1578338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855560 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089785, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1578338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855595 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089789, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1594048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855606 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089795, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1616302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855616 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089780, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1558337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855631 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089785, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1578338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855642 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089770, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1527338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855652 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089795, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1616302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855698 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089780, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 
1760003541.1558337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855710 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089807, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1666753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855721 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089780, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1558337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855731 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089770, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1527338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855749 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089794, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1614356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855760 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089795, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1616302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855770 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089780, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1558337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855812 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089780, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1558337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855824 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089773, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1538062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855835 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089795, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1616302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855845 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089807, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1666753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855860 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089770, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1527338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855870 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089795, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1616302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855880 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089771, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 
1759968135.0, 'ctime': 1760003541.1528337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855925 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089792, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1598823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-10-09 10:40:21.855937 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089770, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1527338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855947 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089795, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1616302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855957 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089807, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1666753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855972 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089807, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1666753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855982 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089794, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1614356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.855999 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089770, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1527338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.856014 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089788, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1588337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.856024 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089773, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1538062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-10-09 10:40:21.856035 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089786, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1587718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856045 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089770, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1527338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856060 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089794, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1614356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856070 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089794, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1614356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856086 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089807, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1666753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856102 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089773, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1538062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856113 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089772, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1535563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856123 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089807, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1666753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856134 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089771, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1528337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856148 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089804, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.165834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856159 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089771, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1528337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856197 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:40:21.856207 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089773, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1538062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856223 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089794, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1614356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856233 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089788, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1588337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856243 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089794, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1614356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856254 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089786, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1587718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856268 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089788, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1588337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856285 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089771, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1528337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856295 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089773, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1538062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856310 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089782, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.156834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856321 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089804, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.165834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856331 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:40:21.856341 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089786, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1587718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856351 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089773, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1538062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856365 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089788, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1588337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856383 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089804, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.165834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856393 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:40:21.856403 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089771, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1528337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856417 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089771, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1528337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856428 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089786, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1587718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856438 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089804, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.165834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856448 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:40:21.856458 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089789, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1594048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856480 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089788, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1588337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856490 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089788, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1588337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856500 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089786, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1587718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856514 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089786, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1587718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856525 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089804, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.165834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856535 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:40:21.856545 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089804, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.165834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856554 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:40:21.856564 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089785, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1578338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856585 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089780, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1558337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856595 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089795, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1616302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856605 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089770, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1527338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856620 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089807, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1666753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856630 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089794, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1614356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856640 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089773, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1538062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856650 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089771, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1528337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856670 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089788, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1588337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856681 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules',
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089786, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1587718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856691 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089804, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.165834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-10-09 10:40:21.856701 | orchestrator |
2025-10-09 10:40:21.856710 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-10-09 10:40:21.856721 | orchestrator | Thursday 09 October 2025 10:37:48 +0000 (0:00:29.455) 0:00:57.133 ******
2025-10-09 10:40:21.856730 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:40:21.856740 | orchestrator |
2025-10-09 10:40:21.856754 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-10-09 10:40:21.856765 | orchestrator | Thursday 09 October 2025 10:37:48 +0000 (0:00:00.872) 0:00:58.005 ******
2025-10-09 10:40:21.856775 | orchestrator | [WARNING]: Skipped
2025-10-09 10:40:21.856785 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.856795 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-10-09 10:40:21.856804 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.856814 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-10-09 10:40:21.856825 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:40:21.856835 | orchestrator | [WARNING]: Skipped
2025-10-09 10:40:21.856844 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.856854 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-10-09 10:40:21.856864 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.856873 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-10-09 10:40:21.856883 | orchestrator | [WARNING]: Skipped
2025-10-09 10:40:21.856892 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.856902 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-10-09 10:40:21.856920 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.856929 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-10-09 10:40:21.856939 | orchestrator | [WARNING]: Skipped
2025-10-09 10:40:21.856949 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.856958 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-10-09 10:40:21.856968 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.856977 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-10-09 10:40:21.856987 | orchestrator | [WARNING]: Skipped
2025-10-09 10:40:21.856997 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.857006 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-10-09 10:40:21.857016 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.857026 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-10-09 10:40:21.857036 | orchestrator | [WARNING]: Skipped
2025-10-09 10:40:21.857045 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.857055 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-10-09 10:40:21.857064 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.857074 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-10-09 10:40:21.857084 | orchestrator | [WARNING]: Skipped
2025-10-09 10:40:21.857093 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.857103 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-10-09 10:40:21.857112 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-10-09 10:40:21.857122 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-10-09 10:40:21.857132 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-10-09 10:40:21.857141 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-09 10:40:21.857151 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-10-09 10:40:21.857174 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-10-09 10:40:21.857188 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-10-09 10:40:21.857198 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-10-09 10:40:21.857208 | orchestrator |
2025-10-09 10:40:21.857218 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-10-09 10:40:21.857228 | orchestrator | Thursday 09 October 2025 10:37:52 +0000 (0:00:03.776) 0:01:01.781 ******
2025-10-09 10:40:21.857238 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-10-09 10:40:21.857249 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:40:21.857259 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-10-09 10:40:21.857269 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:40:21.857279 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-10-09 10:40:21.857289 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:40:21.857298 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-10-09 10:40:21.857308 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:40:21.857318 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-10-09 10:40:21.857328 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:40:21.857338 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-10-09 10:40:21.857347 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:40:21.857357 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-10-09 10:40:21.857373 | orchestrator |
2025-10-09 10:40:21.857383 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-10-09 10:40:21.857393 | orchestrator | Thursday 09 October 2025 10:38:16 +0000 (0:00:23.925) 0:01:25.707 ******
2025-10-09 10:40:21.857407 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-10-09 10:40:21.857418 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:40:21.857428 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-10-09 10:40:21.857437 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:40:21.857447 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-10-09 10:40:21.857457 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:40:21.857467 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-10-09 10:40:21.857476 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-10-09 10:40:21.857486 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:40:21.857496 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:40:21.857505 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-10-09 10:40:21.857515 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:40:21.857525 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-10-09 10:40:21.857534 | orchestrator |
2025-10-09 10:40:21.857544 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-10-09 10:40:21.857554 | orchestrator | Thursday 09 October 2025 10:38:20 +0000 (0:00:04.212) 0:01:29.919 ******
2025-10-09 10:40:21.857564 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-10-09 10:40:21.857574 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-10-09 10:40:21.857584 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:40:21.857594 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:40:21.857604 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-10-09 10:40:21.857614 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:40:21.857623 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-10-09 10:40:21.857633 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:40:21.857643 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-10-09 10:40:21.857653 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:40:21.857662 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-10-09 10:40:21.857672 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:40:21.857682 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-10-09 10:40:21.857691 | orchestrator |
2025-10-09 10:40:21.857701 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-10-09 10:40:21.857711 | orchestrator | Thursday 09 October 2025 10:38:23 +0000 (0:00:03.101) 0:01:33.021 ******
2025-10-09 10:40:21.857721 | orchestrator | ok: [testbed-manager -> localhost]
2025-10-09 10:40:21.857730 | orchestrator |
2025-10-09 10:40:21.857740 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-10-09 10:40:21.857750 | orchestrator | Thursday 09 October 2025 10:38:25 +0000 (0:00:01.624) 0:01:34.645 ******
2025-10-09 10:40:21.857764 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:40:21.857780 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:40:21.857790 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:40:21.857800 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:40:21.857809 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:40:21.857819 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:40:21.857829 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:40:21.857838 | orchestrator |
2025-10-09 10:40:21.857848 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-10-09 10:40:21.857858 | orchestrator | Thursday 09 October 2025 10:38:26 +0000 (0:00:00.983) 0:01:35.629 ******
2025-10-09 10:40:21.857868 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:40:21.857878 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:40:21.857887 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:40:21.857897 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:40:21.857907 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:40:21.857916 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:40:21.857926 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:40:21.857935 | orchestrator |
2025-10-09 10:40:21.857945 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-10-09 10:40:21.857955 | orchestrator | Thursday 09 October 2025 10:38:30 +0000 (0:00:04.196) 0:01:39.826 ******
2025-10-09 10:40:21.857965 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-09 10:40:21.857974 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-09 10:40:21.857984 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:40:21.857994 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:40:21.858003 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-10-09 10:40:21.858042 | orchestrator | skipping: [testbed-manager]
2025-10-09 10:40:21.858054 | orchestrator | skipping: [testbed-node-2] =>
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:40:21.858064 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:21.858079 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:40:21.858090 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:40:21.858099 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:40:21.858109 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:40:21.858119 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-10-09 10:40:21.858128 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:40:21.858138 | orchestrator | 2025-10-09 10:40:21.858147 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-10-09 10:40:21.858157 | orchestrator | Thursday 09 October 2025 10:38:33 +0000 (0:00:03.115) 0:01:42.941 ****** 2025-10-09 10:40:21.858182 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:40:21.858193 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:21.858203 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:40:21.858212 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:21.858222 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:40:21.858232 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:21.858242 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:40:21.858251 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:40:21.858261 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:40:21.858271 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:40:21.858287 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-10-09 10:40:21.858297 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-10-09 10:40:21.858307 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:40:21.858317 | orchestrator | 2025-10-09 10:40:21.858326 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-10-09 10:40:21.858336 | orchestrator | Thursday 09 October 2025 10:38:35 +0000 (0:00:02.006) 0:01:44.948 ****** 2025-10-09 10:40:21.858346 | orchestrator | [WARNING]: Skipped 2025-10-09 10:40:21.858356 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-10-09 10:40:21.858365 | orchestrator | due to this access issue: 2025-10-09 10:40:21.858375 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-10-09 10:40:21.858385 | orchestrator | not a directory 2025-10-09 10:40:21.858395 | orchestrator | ok: [testbed-manager -> localhost] 2025-10-09 10:40:21.858404 | orchestrator | 2025-10-09 10:40:21.858414 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-10-09 10:40:21.858424 | orchestrator | Thursday 09 October 2025 10:38:37 +0000 (0:00:01.416) 0:01:46.365 ****** 2025-10-09 10:40:21.858434 | orchestrator | skipping: [testbed-manager] 2025-10-09 10:40:21.858444 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:40:21.858453 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:40:21.858463 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:40:21.858473 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:40:21.858482 
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Template extra prometheus server config files] **************
Thursday 09 October 2025 10:38:38 +0000 (0:00:00.848) 0:01:47.214 ******
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Check prometheus containers] ********************************
Thursday 09 October 2025 10:38:39 +0000 (0:00:01.518) 0:01:48.732 ******
changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
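Each item logged by the "Check prometheus containers" task is a kolla container spec (name, image, optional `pid_mode`, volume list). As a reading aid only, here is a minimal sketch that maps such a spec dict onto the roughly equivalent `docker run` argument list; the helper name `docker_run_args` is invented here, and the real `kolla_container` module does considerably more (config-file injection, restart policies, dimensions).

```python
def docker_run_args(spec):
    """Sketch: translate a kolla container spec dict (as logged above)
    into an illustrative 'docker run' argument list."""
    args = ["docker", "run", "-d", "--name", spec["container_name"]]
    if spec.get("pid_mode"):
        # e.g. the node-exporter spec runs with the host PID namespace
        args += ["--pid", spec["pid_mode"]]
    for volume in spec.get("volumes", []):
        args += ["-v", volume]
    args.append(spec["image"])
    return args

# Applied to the prometheus-node-exporter spec from the log, this yields
# '--pid host' and the host root bind mount '/:/host:ro,rslave'.
node_exporter = {
    "container_name": "prometheus_node_exporter",
    "pid_mode": "host",
    "volumes": ["/:/host:ro,rslave"],
    "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
}
args = docker_run_args(node_exporter)
```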
TASK [prometheus : Creating prometheus database user and setting permissions] ***
Thursday 09 October 2025 10:38:44 +0000 (0:00:04.602) 0:01:53.335 ******
skipping: [testbed-manager] => (item=testbed-node-0)
skipping: [testbed-manager]

TASK [prometheus : Flush handlers] *********************************************
Thursday 09 October 2025 10:38:45 +0000 (0:00:01.135) 0:01:54.470 ******

TASK [prometheus : Flush handlers] *********************************************
Thursday 09 October 2025 10:38:45 +0000 (0:00:00.067) 0:01:54.537 ******

TASK [prometheus : Flush handlers] *********************************************
Thursday 09 October 2025 10:38:45 +0000 (0:00:00.070) 0:01:54.608 ******

TASK [prometheus : Flush handlers] *********************************************
Thursday 09 October 2025 10:38:45 +0000 (0:00:00.127) 0:01:54.736 ******

TASK [prometheus : Flush handlers] *********************************************
Thursday 09 October 2025 10:38:45 +0000 (0:00:00.332) 0:01:55.068 ******

TASK [prometheus : Flush handlers] *********************************************
Thursday 09 October 2025 10:38:46 +0000 (0:00:00.060) 0:01:55.129 ******

TASK [prometheus : Flush handlers] *********************************************
Thursday 09 October 2025 10:38:46 +0000 (0:00:00.069) 0:01:55.198 ******

RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
Thursday 09 October 2025 10:38:46 +0000 (0:00:00.080) 0:01:55.279 ******
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
Thursday 09 October 2025 10:39:05 +0000 (0:00:19.824) 0:02:15.104 ******
changed: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-0]

RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
Thursday 09 October 2025 10:39:20 +0000 (0:00:14.347) 0:02:29.451 ******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
Thursday 09 October 2025 10:39:25 +0000 (0:00:05.369) 0:02:34.820 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
Thursday 09 October 2025 10:39:31 +0000 (0:00:05.325) 0:02:40.146 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-4]

RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
Thursday 09 October 2025 10:39:48 +0000 (0:00:17.066) 0:02:57.213 ******
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
Thursday 09 October 2025 10:39:55 +0000 (0:00:07.434) 0:03:04.647 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
Thursday 09 October 2025 10:40:05 +0000 (0:00:10.001) 0:03:14.649 ******
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
Thursday 09 October 2025 10:40:11 +0000 (0:00:05.792) 0:03:20.442 ******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

PLAY RECAP *********************************************************************
testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Thursday 09 October 2025 10:40:18 +0000 (0:00:06.974) 0:03:27.417 ******
===============================================================================
prometheus : Copying over custom prometheus alert rules files ---------- 29.46s
prometheus : Copying over prometheus config file ----------------------- 23.93s
prometheus : Restart prometheus-server container ----------------------- 19.82s
prometheus : Restart prometheus-cadvisor container --------------------- 17.07s
prometheus : Restart prometheus-node-exporter container ---------------- 14.35s
prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.00s
prometheus : Restart prometheus-alertmanager container ------------------ 7.43s
prometheus : Restart prometheus-libvirt-exporter container -------------- 6.97s
prometheus : Copying over config.json files ----------------------------- 6.28s
service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.23s
prometheus : Restart prometheus-blackbox-exporter container ------------- 5.79s
prometheus : Restart prometheus-mysqld-exporter container --------------- 5.37s
prometheus : Restart prometheus-memcached-exporter container ------------ 5.33s
prometheus : Check prometheus containers -------------------------------- 4.60s
prometheus : Copying over prometheus web config file -------------------- 4.21s
prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.20s
prometheus : Ensuring config directories exist -------------------------- 4.07s
prometheus : Find prometheus host config overrides ---------------------- 3.78s
prometheus : Copying cloud config file for openstack exporter ----------- 3.12s
prometheus : Copying over prometheus alertmanager config file ----------- 3.10s

2025-10-09 10:40:21.860124 | orchestrator | 2025-10-09 10:40:21 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:40:24.895445 | orchestrator | 2025-10-09 10:40:24 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED
2025-10-09 10:40:24.899229 | orchestrator | 2025-10-09 10:40:24 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED
2025-10-09 10:40:24.901232 | orchestrator | 2025-10-09 10:40:24 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:40:24.903363 | orchestrator | 2025-10-09 10:40:24 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED
2025-10-09 10:40:24.903397 | orchestrator | 2025-10-09 10:40:24 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:40:27.958308 | orchestrator | 2025-10-09 10:40:27 | INFO  | Task
f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED
2025-10-09 10:41:04.619271 | orchestrator | 2025-10-09 10:41:04 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED
2025-10-09 10:41:04.620854 | orchestrator | 2025-10-09 10:41:04 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:41:04.623048 | orchestrator | 2025-10-09 10:41:04 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state STARTED
2025-10-09 10:41:04.625310 | orchestrator | 2025-10-09 10:41:04 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:41:07.662496 | orchestrator | 2025-10-09 10:41:07 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED
2025-10-09 10:41:07.663510 | orchestrator | 2025-10-09 10:41:07 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED
2025-10-09 10:41:07.664962 | orchestrator | 2025-10-09 10:41:07 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:41:07.668861 | orchestrator | 2025-10-09 10:41:07 | INFO  | Task 927ae3a3-f9ee-4e6f-9cc3-c07493ec3019 is in state SUCCESS
2025-10-09 10:41:07.670379 | orchestrator |
2025-10-09 10:41:07.670418 | orchestrator |
2025-10-09 10:41:07.670431 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:41:07.670443 | orchestrator |
2025-10-09 10:41:07.670455 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:41:07.670467 | orchestrator | Thursday 09 October 2025 10:37:11 +0000 (0:00:00.351) 0:00:00.351 ******
2025-10-09 10:41:07.670578 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:41:07.670620 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:41:07.670631 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:41:07.670642 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:41:07.670653 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:41:07.670664 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:41:07.670676 | orchestrator |
2025-10-09 10:41:07.670688 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:41:07.670699 | orchestrator | Thursday 09 October 2025 10:37:12 +0000 (0:00:01.320) 0:00:01.672 ******
2025-10-09 10:41:07.670710 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-10-09 10:41:07.670722 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-10-09 10:41:07.670733 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-10-09 10:41:07.670744 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-10-09 10:41:07.670755 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-10-09 10:41:07.670766 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-10-09 10:41:07.670776 | orchestrator |
2025-10-09 10:41:07.670801 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-10-09 10:41:07.670813 | orchestrator |
2025-10-09 10:41:07.670824 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-10-09 10:41:07.670835 | orchestrator | Thursday 09 October 2025 10:37:13 +0000 (0:00:00.654) 0:00:02.326 ******
2025-10-09 10:41:07.670846 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:41:07.670859 | orchestrator |
2025-10-09 10:41:07.670871 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-10-09 10:41:07.670881 | orchestrator | Thursday 09 October 2025 10:37:14 +0000 (0:00:01.209) 0:00:03.536 ******
2025-10-09 10:41:07.670893 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-10-09 10:41:07.671372 | orchestrator |
2025-10-09 10:41:07.671387 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-10-09 10:41:07.671398 | orchestrator | Thursday 09 October 2025 10:37:18 +0000 (0:00:03.437) 0:00:06.974 ******
2025-10-09 10:41:07.671409 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-10-09 10:41:07.671421 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-10-09 10:41:07.671432 | orchestrator |
2025-10-09 10:41:07.671443 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-10-09 10:41:07.671454 | orchestrator | Thursday 09 October 2025 10:37:24 +0000 (0:00:06.596) 0:00:13.570 ******
2025-10-09 10:41:07.671733 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-10-09 10:41:07.671746 | orchestrator |
2025-10-09 10:41:07.671757 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-10-09 10:41:07.671768 | orchestrator | Thursday 09 October 2025 10:37:28 +0000 (0:00:03.644) 0:00:17.214 ******
2025-10-09 10:41:07.671779 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-09 10:41:07.671790 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-10-09 10:41:07.671802 | orchestrator |
2025-10-09 10:41:07.671813 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-10-09 10:41:07.671823 | orchestrator | Thursday 09 October 2025 10:37:32 +0000 (0:00:04.027) 0:00:21.242 ******
2025-10-09 10:41:07.671834 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-10-09 10:41:07.671845 | orchestrator |
2025-10-09 10:41:07.671856 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-10-09 10:41:07.671867 | orchestrator | Thursday 09 October 2025 10:37:36 +0000 (0:00:03.792)
0:00:25.034 ****** 2025-10-09 10:41:07.671878 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-10-09 10:41:07.671889 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-10-09 10:41:07.671912 | orchestrator | 2025-10-09 10:41:07.671923 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-10-09 10:41:07.671934 | orchestrator | Thursday 09 October 2025 10:37:43 +0000 (0:00:07.195) 0:00:32.230 ****** 2025-10-09 10:41:07.671949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.672005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.672028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.672041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.672053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.672074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.672117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.672131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.672149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.672186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.672207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.672220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})
2025-10-09 10:41:07.672232 | orchestrator |
2025-10-09 10:41:07.672275 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-10-09 10:41:07.672289 | orchestrator | Thursday 09 October 2025 10:37:46 +0000 (0:00:03.200) 0:00:35.431 ******
2025-10-09 10:41:07.672300 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:41:07.672311 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:41:07.672321 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:41:07.672332 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:41:07.672343 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:41:07.672354 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:41:07.672366 | orchestrator |
2025-10-09 10:41:07.672380 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-10-09 10:41:07.672392 | orchestrator | Thursday 09 October 2025 10:37:47 +0000 (0:00:00.775) 0:00:36.206 ******
2025-10-09 10:41:07.672405 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:41:07.672417 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:41:07.672430 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:41:07.672442 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:41:07.672455 | orchestrator |
2025-10-09 10:41:07.672467 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-10-09 10:41:07.672479 | orchestrator | Thursday 09 October 2025 10:37:48 +0000 (0:00:01.486) 0:00:37.693 ******
2025-10-09 10:41:07.672492 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-10-09 10:41:07.672509 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-10-09 10:41:07.672523 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-10-09 10:41:07.672535
| orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-10-09 10:41:07.672548 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-10-09 10:41:07.672560 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-10-09 10:41:07.672573 | orchestrator | 2025-10-09 10:41:07.672586 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-10-09 10:41:07.672598 | orchestrator | Thursday 09 October 2025 10:37:51 +0000 (0:00:02.714) 0:00:40.407 ****** 2025-10-09 10:41:07.672612 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:41:07.672634 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:41:07.672648 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:41:07.672690 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:41:07.672710 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:41:07.672724 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-10-09 10:41:07.672745 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:41:07.672757 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:41:07.672798 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:41:07.672816 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:41:07.672837 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:41:07.672849 | orchestrator | 
changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-10-09 10:41:07.672860 | orchestrator | 2025-10-09 10:41:07.672871 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-10-09 10:41:07.672882 | orchestrator | Thursday 09 October 2025 10:37:56 +0000 (0:00:05.280) 0:00:45.688 ****** 2025-10-09 10:41:07.672893 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:41:07.672905 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:41:07.672916 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-10-09 10:41:07.672927 | orchestrator | 2025-10-09 10:41:07.672938 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-10-09 10:41:07.672949 | orchestrator | Thursday 09 October 2025 10:37:59 +0000 (0:00:02.458) 0:00:48.146 ****** 2025-10-09 10:41:07.672959 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-10-09 10:41:07.672970 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-10-09 10:41:07.672981 | orchestrator | changed: 
[testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-10-09 10:41:07.672991 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-10-09 10:41:07.673002 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-10-09 10:41:07.673041 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-10-09 10:41:07.673054 | orchestrator | 2025-10-09 10:41:07.673065 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-10-09 10:41:07.673076 | orchestrator | Thursday 09 October 2025 10:38:02 +0000 (0:00:03.090) 0:00:51.237 ****** 2025-10-09 10:41:07.673087 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-10-09 10:41:07.673098 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-10-09 10:41:07.673108 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-10-09 10:41:07.673119 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-10-09 10:41:07.673130 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-10-09 10:41:07.673140 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-10-09 10:41:07.673151 | orchestrator | 2025-10-09 10:41:07.673268 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-10-09 10:41:07.673288 | orchestrator | Thursday 09 October 2025 10:38:03 +0000 (0:00:01.335) 0:00:52.572 ****** 2025-10-09 10:41:07.673299 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:07.673310 | orchestrator | 2025-10-09 10:41:07.673321 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-10-09 10:41:07.673332 | orchestrator | Thursday 09 October 2025 10:38:03 +0000 (0:00:00.131) 0:00:52.704 ****** 2025-10-09 10:41:07.673343 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:07.673353 | orchestrator | skipping: 
[testbed-node-1] 2025-10-09 10:41:07.673375 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:07.673386 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:41:07.673396 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:41:07.673407 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:41:07.673418 | orchestrator | 2025-10-09 10:41:07.673429 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-09 10:41:07.673440 | orchestrator | Thursday 09 October 2025 10:38:04 +0000 (0:00:01.048) 0:00:53.752 ****** 2025-10-09 10:41:07.673452 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:41:07.673464 | orchestrator | 2025-10-09 10:41:07.673475 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-10-09 10:41:07.673486 | orchestrator | Thursday 09 October 2025 10:38:06 +0000 (0:00:01.794) 0:00:55.546 ****** 2025-10-09 10:41:07.673498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 
10:41:07.673511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.673559 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.673582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.673593 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.673603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.673613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.673623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.673667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.673684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.673695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.673705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.673715 | orchestrator | 2025-10-09 10:41:07.673725 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-10-09 10:41:07.673734 | orchestrator | Thursday 09 October 2025 10:38:09 +0000 (0:00:03.204) 0:00:58.751 ****** 2025-10-09 10:41:07.673745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:41:07.673760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.673777 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:07.673787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:41:07.673802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.673812 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:07.673822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:41:07.673833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.673842 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:07.673852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.673877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.673887 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:41:07.673902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.673913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.673923 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:41:07.673933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2025-10-09 10:41:07.673943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.673960 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:41:07.673970 | orchestrator | 2025-10-09 10:41:07.673980 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-10-09 10:41:07.673990 | orchestrator | Thursday 09 October 2025 10:38:12 +0000 (0:00:02.783) 0:01:01.534 ****** 2025-10-09 10:41:07.674005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-10-09 10:41:07.674048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:41:07.674072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674082 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:07.674092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:41:07.674118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674129 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:07.674138 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:07.674153 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674191 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:41:07.674201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674228 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:41:07.674244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674265 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:41:07.674275 | orchestrator | 2025-10-09 10:41:07.674290 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-10-09 10:41:07.674300 | orchestrator | Thursday 09 October 2025 10:38:15 +0000 (0:00:02.829) 0:01:04.364 ****** 2025-10-09 10:41:07.674310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.674320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.674341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.674357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-09 10:41:07.674461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-09 10:41:07.674471 | orchestrator |
2025-10-09 10:41:07.674481 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-10-09 10:41:07.674491 | orchestrator | Thursday 09 October 2025 10:38:19 +0000 (0:00:04.458) 0:01:08.822 ******
2025-10-09 10:41:07.674506 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-10-09 10:41:07.674516 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:41:07.674526 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-10-09 10:41:07.674536 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-10-09 10:41:07.674545 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:41:07.674555 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-10-09 10:41:07.674565 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-10-09 10:41:07.674574 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:41:07.674584 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-10-09 10:41:07.674594 | orchestrator |
2025-10-09 10:41:07.674604 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-10-09 10:41:07.674613 | orchestrator | Thursday 09 October 2025 10:38:23 +0000 (0:00:03.237) 0:01:12.060 ******
2025-10-09 10:41:07.674623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-10-09 10:41:07.674639 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.674665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.674685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674711 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674752 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.674772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-10-09 10:41:07.674782 | orchestrator |
2025-10-09 10:41:07.674792 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-10-09 10:41:07.674802 | orchestrator | Thursday 09 October 2025 10:38:35 +0000 (0:00:12.634) 0:01:24.694 ******
2025-10-09 10:41:07.674820 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:41:07.674831 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:41:07.674840 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:41:07.674850 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:41:07.674859 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:41:07.674869 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:41:07.674878 | orchestrator |
2025-10-09 10:41:07.674888 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-10-09 10:41:07.674898 | orchestrator | Thursday 09 October 2025 10:38:37 +0000 (0:00:02.070) 0:01:26.765 ******
2025-10-09 10:41:07.674908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 
10:41:07.674924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.674934 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:07.675036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:41:07.675056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.675067 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:07.675084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-10-09 10:41:07.675095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.675105 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:07.675126 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.675137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.675147 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:41:07.675173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.675184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.675195 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:41:07.675210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.675231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-10-09 10:41:07.675242 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:41:07.675252 | orchestrator | 2025-10-09 10:41:07.675261 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-10-09 10:41:07.675271 | orchestrator | Thursday 09 October 2025 10:38:39 +0000 (0:00:01.465) 0:01:28.230 ****** 2025-10-09 10:41:07.675281 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:07.675291 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:41:07.675300 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:07.675310 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:41:07.675319 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:41:07.675329 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:41:07.675339 | orchestrator | 2025-10-09 10:41:07.675349 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-10-09 10:41:07.675359 | 
orchestrator | Thursday 09 October 2025 10:38:39 +0000 (0:00:00.662) 0:01:28.893 ****** 2025-10-09 10:41:07.675368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.675379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.675395 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-10-09 10:41:07.675416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.675427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.675437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.675447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.675463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.675479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.675493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.675504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.675514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-10-09 10:41:07.675524 | orchestrator | 2025-10-09 10:41:07.675534 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-10-09 10:41:07.675543 | orchestrator | Thursday 09 October 2025 10:38:43 +0000 (0:00:03.745) 0:01:32.639 ****** 2025-10-09 10:41:07.675553 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:07.675563 | orchestrator | skipping: [testbed-node-1] 
2025-10-09 10:41:07.675572 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:41:07.675582 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:41:07.675592 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:41:07.675601 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:41:07.675611 | orchestrator | 2025-10-09 10:41:07.675620 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-10-09 10:41:07.675630 | orchestrator | Thursday 09 October 2025 10:38:44 +0000 (0:00:00.545) 0:01:33.184 ****** 2025-10-09 10:41:07.675639 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:07.675649 | orchestrator | 2025-10-09 10:41:07.675659 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-10-09 10:41:07.675674 | orchestrator | Thursday 09 October 2025 10:38:46 +0000 (0:00:02.590) 0:01:35.774 ****** 2025-10-09 10:41:07.675683 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:07.675693 | orchestrator | 2025-10-09 10:41:07.675702 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-10-09 10:41:07.675712 | orchestrator | Thursday 09 October 2025 10:38:49 +0000 (0:00:02.525) 0:01:38.300 ****** 2025-10-09 10:41:07.675722 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:07.675731 | orchestrator | 2025-10-09 10:41:07.675741 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:41:07.675750 | orchestrator | Thursday 09 October 2025 10:39:09 +0000 (0:00:20.139) 0:01:58.440 ****** 2025-10-09 10:41:07.675760 | orchestrator | 2025-10-09 10:41:07.675774 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:41:07.675784 | orchestrator | Thursday 09 October 2025 10:39:09 +0000 (0:00:00.188) 0:01:58.628 ****** 2025-10-09 10:41:07.675793 | orchestrator | 2025-10-09 10:41:07.675803 | 
orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:41:07.675812 | orchestrator | Thursday 09 October 2025 10:39:09 +0000 (0:00:00.254) 0:01:58.883 ****** 2025-10-09 10:41:07.675822 | orchestrator | 2025-10-09 10:41:07.675832 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:41:07.675841 | orchestrator | Thursday 09 October 2025 10:39:10 +0000 (0:00:00.146) 0:01:59.029 ****** 2025-10-09 10:41:07.675851 | orchestrator | 2025-10-09 10:41:07.675860 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:41:07.675870 | orchestrator | Thursday 09 October 2025 10:39:10 +0000 (0:00:00.169) 0:01:59.199 ****** 2025-10-09 10:41:07.675879 | orchestrator | 2025-10-09 10:41:07.675889 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-10-09 10:41:07.675898 | orchestrator | Thursday 09 October 2025 10:39:10 +0000 (0:00:00.342) 0:01:59.542 ****** 2025-10-09 10:41:07.675908 | orchestrator | 2025-10-09 10:41:07.675917 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-10-09 10:41:07.675927 | orchestrator | Thursday 09 October 2025 10:39:10 +0000 (0:00:00.111) 0:01:59.653 ****** 2025-10-09 10:41:07.675942 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:07.675952 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:41:07.675962 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:41:07.675972 | orchestrator | 2025-10-09 10:41:07.675981 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-10-09 10:41:07.675991 | orchestrator | Thursday 09 October 2025 10:39:35 +0000 (0:00:24.774) 0:02:24.428 ****** 2025-10-09 10:41:07.676000 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:41:07.676010 | orchestrator | changed: [testbed-node-1] 
2025-10-09 10:41:07.676019 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:41:07.676029 | orchestrator | 2025-10-09 10:41:07.676039 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-10-09 10:41:07.676048 | orchestrator | Thursday 09 October 2025 10:39:43 +0000 (0:00:08.231) 0:02:32.659 ****** 2025-10-09 10:41:07.676058 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:41:07.676067 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:41:07.676077 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:41:07.676086 | orchestrator | 2025-10-09 10:41:07.676096 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-10-09 10:41:07.676106 | orchestrator | Thursday 09 October 2025 10:40:57 +0000 (0:01:13.309) 0:03:45.968 ****** 2025-10-09 10:41:07.676115 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:41:07.676125 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:41:07.676134 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:41:07.676144 | orchestrator | 2025-10-09 10:41:07.676175 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-10-09 10:41:07.676186 | orchestrator | Thursday 09 October 2025 10:41:04 +0000 (0:00:07.876) 0:03:53.845 ****** 2025-10-09 10:41:07.676202 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:41:07.676212 | orchestrator | 2025-10-09 10:41:07.676222 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:41:07.676232 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-10-09 10:41:07.676242 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-10-09 10:41:07.676252 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 
2025-10-09 10:41:07.676262 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-10-09 10:41:07.676271 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-10-09 10:41:07.676281 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-10-09 10:41:07.676291 | orchestrator | 2025-10-09 10:41:07.676300 | orchestrator | 2025-10-09 10:41:07.676310 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:41:07.676320 | orchestrator | Thursday 09 October 2025 10:41:06 +0000 (0:00:01.075) 0:03:54.920 ****** 2025-10-09 10:41:07.676330 | orchestrator | =============================================================================== 2025-10-09 10:41:07.676339 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 73.31s 2025-10-09 10:41:07.676349 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.77s 2025-10-09 10:41:07.676358 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.14s 2025-10-09 10:41:07.676368 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.63s 2025-10-09 10:41:07.676378 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.23s 2025-10-09 10:41:07.676387 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.88s 2025-10-09 10:41:07.676397 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.20s 2025-10-09 10:41:07.676407 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.60s 2025-10-09 10:41:07.676421 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.28s 2025-10-09 10:41:07.676431 
| orchestrator | cinder : Copying over config.json files for services -------------------- 4.46s 2025-10-09 10:41:07.676441 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.03s 2025-10-09 10:41:07.676451 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.79s 2025-10-09 10:41:07.676460 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.75s 2025-10-09 10:41:07.676470 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.64s 2025-10-09 10:41:07.676480 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.44s 2025-10-09 10:41:07.676489 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.24s 2025-10-09 10:41:07.676499 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.20s 2025-10-09 10:41:07.676509 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.20s 2025-10-09 10:41:07.676518 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.09s 2025-10-09 10:41:07.676528 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.83s 2025-10-09 10:41:07.676542 | orchestrator | 2025-10-09 10:41:07 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:41:07.676558 | orchestrator | 2025-10-09 10:41:07 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:10.704413 | orchestrator | 2025-10-09 10:41:10 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED 2025-10-09 10:41:10.704514 | orchestrator | 2025-10-09 10:41:10 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:41:10.705087 | orchestrator | 2025-10-09 10:41:10 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 
2025-10-09 10:41:10.705998 | orchestrator | 2025-10-09 10:41:10 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:41:10.707513 | orchestrator | 2025-10-09 10:41:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:13.743607 | orchestrator | 2025-10-09 10:41:13 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED 2025-10-09 10:41:13.743882 | orchestrator | 2025-10-09 10:41:13 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:41:13.744552 | orchestrator | 2025-10-09 10:41:13 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:41:13.746295 | orchestrator | 2025-10-09 10:41:13 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:41:13.746340 | orchestrator | 2025-10-09 10:41:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:16.779668 | orchestrator | 2025-10-09 10:41:16 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED 2025-10-09 10:41:16.780626 | orchestrator | 2025-10-09 10:41:16 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:41:16.781351 | orchestrator | 2025-10-09 10:41:16 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:41:16.781995 | orchestrator | 2025-10-09 10:41:16 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:41:16.782261 | orchestrator | 2025-10-09 10:41:16 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:19.811895 | orchestrator | 2025-10-09 10:41:19 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED 2025-10-09 10:41:19.812242 | orchestrator | 2025-10-09 10:41:19 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:41:19.814141 | orchestrator | 2025-10-09 10:41:19 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:41:19.814773 | 
orchestrator | 2025-10-09 10:41:19 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:41:19.814949 | orchestrator | 2025-10-09 10:41:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:22.847433 | orchestrator | 2025-10-09 10:41:22 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED 2025-10-09 10:41:22.848045 | orchestrator | 2025-10-09 10:41:22 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:41:22.849572 | orchestrator | 2025-10-09 10:41:22 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:41:22.850244 | orchestrator | 2025-10-09 10:41:22 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:41:22.850349 | orchestrator | 2025-10-09 10:41:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:25.888878 | orchestrator | 2025-10-09 10:41:25 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED 2025-10-09 10:41:25.889391 | orchestrator | 2025-10-09 10:41:25 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:41:25.891393 | orchestrator | 2025-10-09 10:41:25 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:41:25.891447 | orchestrator | 2025-10-09 10:41:25 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:41:25.891459 | orchestrator | 2025-10-09 10:41:25 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:41:28.918075 | orchestrator | 2025-10-09 10:41:28 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state STARTED 2025-10-09 10:41:28.918350 | orchestrator | 2025-10-09 10:41:28 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:41:28.919431 | orchestrator | 2025-10-09 10:41:28 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:41:28.920630 | orchestrator | 2025-10-09 
10:41:28 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:41:28.920745 | orchestrator | 2025-10-09 10:41:28 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED status checks for tasks f0a2f57c-caf7-47fc-a053-e776e608529b, cdc1c1f5-14ac-4edd-beb8-bf38112ce3db, 9f8c697b-d77d-436e-91ff-19456c414685 and 6aea6d9d-eebb-4c11-b865-296879428050 repeated every ~3 seconds from 10:41:31 to 10:42:29; repeats elided ...]
2025-10-09 10:42:32.719625 | orchestrator | 2025-10-09 10:42:32 | INFO  | Task f0a2f57c-caf7-47fc-a053-e776e608529b is in state SUCCESS 2025-10-09 10:42:32.720769 | orchestrator | 2025-10-09 10:42:32.720817 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:42:32.720829 | orchestrator | 2025-10-09 10:42:32.720840 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
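The polling pattern above (check each task's state, report it, then wait before the next round until every task leaves STARTED) can be sketched as a small loop. This is a minimal illustration, not OSISM's actual client; `get_state` is a hypothetical callable standing in for the real task-status API.

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll task states until no task is STARTED anymore.

    get_state: hypothetical callable mapping a task id to its current
    state string (e.g. "STARTED", "SUCCESS").
    Returns True if all tasks finished before the timeout.
    """
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending and time.monotonic() < deadline:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        pending = still_running
        if pending:
            # Mirrors the "Wait N second(s) until the next check" log line.
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return not pending
```

In the log, four task IDs are polled together and the loop exits once the first task reports SUCCESS and the remaining work hands off to the Ansible play.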
2025-10-09 10:42:32.720852 | orchestrator | Thursday 09 October 2025 10:40:23 +0000 (0:00:00.267) 0:00:00.267 ****** 2025-10-09 10:42:32.720864 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:42:32.720876 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:42:32.720887 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:42:32.720898 | orchestrator | 2025-10-09 10:42:32.720909 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:42:32.720950 | orchestrator | Thursday 09 October 2025 10:40:24 +0000 (0:00:00.332) 0:00:00.600 ****** 2025-10-09 10:42:32.720962 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-10-09 10:42:32.720988 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-10-09 10:42:32.721000 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-10-09 10:42:32.721011 | orchestrator | 2025-10-09 10:42:32.721022 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-10-09 10:42:32.721033 | orchestrator | 2025-10-09 10:42:32.721044 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-10-09 10:42:32.721055 | orchestrator | Thursday 09 October 2025 10:40:24 +0000 (0:00:00.434) 0:00:01.035 ****** 2025-10-09 10:42:32.721066 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:42:32.721078 | orchestrator | 2025-10-09 10:42:32.721089 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-10-09 10:42:32.721100 | orchestrator | Thursday 09 October 2025 10:40:25 +0000 (0:00:00.597) 0:00:01.632 ****** 2025-10-09 10:42:32.721111 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-10-09 10:42:32.721122 | orchestrator | 2025-10-09 10:42:32.721133 | orchestrator | TASK [service-ks-register : 
barbican | Creating endpoints] ********************* 2025-10-09 10:42:32.721186 | orchestrator | Thursday 09 October 2025 10:40:28 +0000 (0:00:03.729) 0:00:05.361 ****** 2025-10-09 10:42:32.721200 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-10-09 10:42:32.721211 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-10-09 10:42:32.721222 | orchestrator | 2025-10-09 10:42:32.721233 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-10-09 10:42:32.721708 | orchestrator | Thursday 09 October 2025 10:40:35 +0000 (0:00:06.832) 0:00:12.194 ****** 2025-10-09 10:42:32.721726 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:42:32.721737 | orchestrator | 2025-10-09 10:42:32.721748 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-10-09 10:42:32.721759 | orchestrator | Thursday 09 October 2025 10:40:38 +0000 (0:00:03.260) 0:00:15.454 ****** 2025-10-09 10:42:32.721770 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:42:32.721780 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-10-09 10:42:32.721791 | orchestrator | 2025-10-09 10:42:32.721802 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-10-09 10:42:32.721813 | orchestrator | Thursday 09 October 2025 10:40:42 +0000 (0:00:03.903) 0:00:19.358 ****** 2025-10-09 10:42:32.721824 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:42:32.721835 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-10-09 10:42:32.721846 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-10-09 10:42:32.721857 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-10-09 10:42:32.721868 | orchestrator | 
changed: [testbed-node-0] => (item=audit) 2025-10-09 10:42:32.721879 | orchestrator | 2025-10-09 10:42:32.721890 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-10-09 10:42:32.721901 | orchestrator | Thursday 09 October 2025 10:41:00 +0000 (0:00:17.674) 0:00:37.032 ****** 2025-10-09 10:42:32.721911 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-10-09 10:42:32.721922 | orchestrator | 2025-10-09 10:42:32.721933 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-10-09 10:42:32.721944 | orchestrator | Thursday 09 October 2025 10:41:04 +0000 (0:00:04.312) 0:00:41.344 ****** 2025-10-09 10:42:32.721960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.722010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.722083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.722097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722223 | orchestrator | 2025-10-09 10:42:32.722234 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-10-09 10:42:32.722246 | orchestrator | Thursday 09 October 2025 10:41:06 +0000 (0:00:02.094) 0:00:43.438 ****** 2025-10-09 10:42:32.722257 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-10-09 
10:42:32.722268 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-10-09 10:42:32.722279 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-10-09 10:42:32.722289 | orchestrator | 2025-10-09 10:42:32.722301 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-10-09 10:42:32.722311 | orchestrator | Thursday 09 October 2025 10:41:07 +0000 (0:00:01.028) 0:00:44.467 ****** 2025-10-09 10:42:32.722322 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:42:32.722333 | orchestrator | 2025-10-09 10:42:32.722344 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-10-09 10:42:32.722355 | orchestrator | Thursday 09 October 2025 10:41:08 +0000 (0:00:00.368) 0:00:44.836 ****** 2025-10-09 10:42:32.722366 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:42:32.722377 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:42:32.722388 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:42:32.722399 | orchestrator | 2025-10-09 10:42:32.722410 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-10-09 10:42:32.722421 | orchestrator | Thursday 09 October 2025 10:41:09 +0000 (0:00:01.387) 0:00:46.223 ****** 2025-10-09 10:42:32.722432 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:42:32.722450 | orchestrator | 2025-10-09 10:42:32.722461 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-10-09 10:42:32.722472 | orchestrator | Thursday 09 October 2025 10:41:10 +0000 (0:00:00.850) 0:00:47.074 ****** 2025-10-09 10:42:32.722484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.722504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.722521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.722534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 
10:42:32.722611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.722622 | orchestrator | 2025-10-09 10:42:32.722634 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-10-09 10:42:32.722645 | orchestrator | Thursday 09 October 2025 10:41:14 +0000 (0:00:04.273) 0:00:51.347 ****** 2025-10-09 10:42:32.722656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:42:32.722675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722699 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:42:32.722717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:42:32.722734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722757 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:42:32.722768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:42:32.722786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722809 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:42:32.722820 | orchestrator | 
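The `item={...}` payloads above are Python dict literals that Ansible's loop output prints for each kolla-ansible service. When debugging a run like this, it can be handy to pull those dicts out of the console log and inspect them programmatically, e.g. to compare the healthcheck commands per node. A minimal sketch, assuming the log format shown above; the `parse_items` helper is illustrative and not part of OSISM, Zuul, or kolla-ansible:

```python
import ast

def parse_items(log_text):
    """Extract each '(item={...})' payload from Ansible loop output and
    parse it as a Python literal (the dumps are plain dict literals)."""
    items = []
    start = 0
    while True:
        idx = log_text.find("(item={", start)
        if idx == -1:
            break
        begin = idx + len("(item=")
        depth = 0
        for j in range(begin, len(log_text)):
            if log_text[j] == "{":
                depth += 1
            elif log_text[j] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        items.append(ast.literal_eval(log_text[begin:j + 1]))
                    except (ValueError, SyntaxError):
                        pass  # skip entries truncated by log wrapping
                    break
        start = idx + 1
    return items

# Example: pull the healthcheck command out of one loop item.
line = ("skipping: [testbed-node-0] => (item={'key': 'barbican-worker', "
        "'value': {'container_name': 'barbican_worker', 'healthcheck': "
        "{'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672']}}})")
for item in parse_items(line):
    print(item["key"], item["value"]["healthcheck"]["test"][1])
```

Note the brace-counting loop rather than a regex: the dumps nest dicts several levels deep (`value` → `healthcheck` → `haproxy`), which a non-greedy pattern would not match reliably. Entries cut off mid-dict by console wrapping are silently skipped.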
2025-10-09 10:42:32.722832 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-10-09 10:42:32.722842 | orchestrator | Thursday 09 October 2025 10:41:16 +0000 (0:00:01.323) 0:00:52.671 ****** 2025-10-09 10:42:32.722861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:42:32.722878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722913 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:42:32.722925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:42:32.722937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.722960 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:42:32.722984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:42:32.722996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.723015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.723026 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:42:32.723037 | orchestrator | 2025-10-09 10:42:32.723048 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-10-09 10:42:32.723059 | orchestrator | Thursday 09 October 2025 10:41:17 +0000 (0:00:01.683) 0:00:54.355 ****** 2025-10-09 10:42:32.723070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.723088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.723104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.723123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723222 | orchestrator | 2025-10-09 10:42:32.723233 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-10-09 10:42:32.723244 | orchestrator | Thursday 09 October 2025 10:41:21 +0000 (0:00:03.364) 0:00:57.719 ****** 2025-10-09 10:42:32.723263 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:42:32.723274 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:42:32.723285 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:42:32.723296 | orchestrator | 2025-10-09 10:42:32.723306 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-10-09 10:42:32.723318 | orchestrator | Thursday 09 October 2025 10:41:23 +0000 (0:00:02.572) 0:01:00.292 ****** 2025-10-09 10:42:32.723328 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:42:32.723339 | orchestrator | 2025-10-09 10:42:32.723350 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-10-09 10:42:32.723361 | orchestrator | Thursday 09 October 2025 10:41:25 +0000 (0:00:01.505) 0:01:01.797 ****** 2025-10-09 10:42:32.723372 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:42:32.723383 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:42:32.723394 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:42:32.723404 | orchestrator | 2025-10-09 10:42:32.723415 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-10-09 10:42:32.723426 | orchestrator | Thursday 09 
October 2025 10:41:26 +0000 (0:00:01.644) 0:01:03.442 ****** 2025-10-09 10:42:32.723437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.723449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 
10:42:32.723466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.723489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723558 | orchestrator | 2025-10-09 10:42:32.723569 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-10-09 10:42:32.723580 | orchestrator | Thursday 09 October 2025 10:41:36 +0000 (0:00:09.893) 0:01:13.335 ****** 2025-10-09 10:42:32.723609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:42:32.723621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.723632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.723644 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:42:32.723656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:42:32.723667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.723684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.723702 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:42:32.723719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-10-09 10:42:32.723731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.723742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:42:32.723754 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:42:32.723765 | orchestrator | 2025-10-09 10:42:32.723776 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-10-09 10:42:32.723787 | orchestrator | Thursday 09 October 2025 10:41:37 +0000 (0:00:00.549) 0:01:13.884 ****** 2025-10-09 10:42:32.723799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.723822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.723839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-10-09 10:42:32.723851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:42:32.723947 | orchestrator | 2025-10-09 10:42:32.723958 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-10-09 10:42:32.723970 | orchestrator | Thursday 09 October 2025 10:41:41 +0000 (0:00:03.802) 0:01:17.686 ****** 2025-10-09 10:42:32.723981 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:42:32.723992 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:42:32.724003 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:42:32.724014 | orchestrator | 2025-10-09 10:42:32.724025 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-10-09 
10:42:32.724036 | orchestrator | Thursday 09 October 2025 10:41:42 +0000 (0:00:00.818) 0:01:18.505 ****** 2025-10-09 10:42:32.724047 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:42:32.724058 | orchestrator | 2025-10-09 10:42:32.724069 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-10-09 10:42:32.724080 | orchestrator | Thursday 09 October 2025 10:41:44 +0000 (0:00:02.397) 0:01:20.902 ****** 2025-10-09 10:42:32.724092 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:42:32.724102 | orchestrator | 2025-10-09 10:42:32.724113 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-10-09 10:42:32.724124 | orchestrator | Thursday 09 October 2025 10:41:46 +0000 (0:00:02.470) 0:01:23.373 ****** 2025-10-09 10:42:32.724136 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:42:32.724199 | orchestrator | 2025-10-09 10:42:32.724212 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-10-09 10:42:32.724223 | orchestrator | Thursday 09 October 2025 10:42:00 +0000 (0:00:13.789) 0:01:37.163 ****** 2025-10-09 10:42:32.724234 | orchestrator | 2025-10-09 10:42:32.724245 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-10-09 10:42:32.724256 | orchestrator | Thursday 09 October 2025 10:42:00 +0000 (0:00:00.120) 0:01:37.283 ****** 2025-10-09 10:42:32.724267 | orchestrator | 2025-10-09 10:42:32.724278 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-10-09 10:42:32.724288 | orchestrator | Thursday 09 October 2025 10:42:00 +0000 (0:00:00.140) 0:01:37.423 ****** 2025-10-09 10:42:32.724299 | orchestrator | 2025-10-09 10:42:32.724310 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-10-09 10:42:32.724321 | orchestrator | Thursday 09 October 2025 10:42:01 +0000 
(0:00:00.155) 0:01:37.578 ****** 2025-10-09 10:42:32.724339 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:42:32.724349 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:42:32.724359 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:42:32.724369 | orchestrator | 2025-10-09 10:42:32.724378 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-10-09 10:42:32.724388 | orchestrator | Thursday 09 October 2025 10:42:10 +0000 (0:00:09.438) 0:01:47.017 ****** 2025-10-09 10:42:32.724398 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:42:32.724408 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:42:32.724417 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:42:32.724427 | orchestrator | 2025-10-09 10:42:32.724437 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-10-09 10:42:32.724447 | orchestrator | Thursday 09 October 2025 10:42:17 +0000 (0:00:07.453) 0:01:54.471 ****** 2025-10-09 10:42:32.724456 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:42:32.724466 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:42:32.724476 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:42:32.724485 | orchestrator | 2025-10-09 10:42:32.724495 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:42:32.724506 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-10-09 10:42:32.724518 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:42:32.724528 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:42:32.724538 | orchestrator | 2025-10-09 10:42:32.724548 | orchestrator | 2025-10-09 10:42:32.724557 | orchestrator | TASKS RECAP 
******************************************************************** 2025-10-09 10:42:32.724567 | orchestrator | Thursday 09 October 2025 10:42:31 +0000 (0:00:13.444) 0:02:07.916 ****** 2025-10-09 10:42:32.724577 | orchestrator | =============================================================================== 2025-10-09 10:42:32.724587 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.67s 2025-10-09 10:42:32.724602 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.79s 2025-10-09 10:42:32.724612 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.44s 2025-10-09 10:42:32.724622 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.89s 2025-10-09 10:42:32.724632 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.44s 2025-10-09 10:42:32.724641 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.45s 2025-10-09 10:42:32.724651 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.83s 2025-10-09 10:42:32.724661 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.31s 2025-10-09 10:42:32.724675 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.27s 2025-10-09 10:42:32.724686 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.90s 2025-10-09 10:42:32.724696 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.80s 2025-10-09 10:42:32.724705 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.73s 2025-10-09 10:42:32.724715 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.36s 2025-10-09 10:42:32.724725 | orchestrator | service-ks-register : barbican | 
Creating projects ---------------------- 3.26s 2025-10-09 10:42:32.724734 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.57s 2025-10-09 10:42:32.724744 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.47s 2025-10-09 10:42:32.724754 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.40s 2025-10-09 10:42:32.724763 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.09s 2025-10-09 10:42:32.724779 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.68s 2025-10-09 10:42:32.724789 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 1.64s 2025-10-09 10:42:32.724799 | orchestrator | 2025-10-09 10:42:32 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:42:32.724809 | orchestrator | 2025-10-09 10:42:32 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:42:32.724819 | orchestrator | 2025-10-09 10:42:32 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:42:32.724829 | orchestrator | 2025-10-09 10:42:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:42:35.763251 | orchestrator | 2025-10-09 10:42:35 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:42:35.764219 | orchestrator | 2025-10-09 10:42:35 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:42:35.765211 | orchestrator | 2025-10-09 10:42:35 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:42:35.766109 | orchestrator | 2025-10-09 10:42:35 | INFO  | Task 3bcc3474-1cd9-42ff-a5bc-22274c0f730a is in state STARTED 2025-10-09 10:42:35.767318 | orchestrator | 2025-10-09 10:42:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:42:38.800472 | 
orchestrator | 2025-10-09 10:42:38 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED [... identical STARTED status checks for tasks cdc1c1f5, 9f8c697b, 6aea6d9d and 3bcc3474, repeated every ~3 seconds, elided ...] 2025-10-09 10:44:10.055209 | orchestrator | 2025-10-09 10:44:10 | INFO  | Task
cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:10.055683 | orchestrator | 2025-10-09 10:44:10 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:10.056720 | orchestrator | 2025-10-09 10:44:10 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:10.058828 | orchestrator | 2025-10-09 10:44:10 | INFO  | Task 3bcc3474-1cd9-42ff-a5bc-22274c0f730a is in state STARTED 2025-10-09 10:44:10.058857 | orchestrator | 2025-10-09 10:44:10 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:13.096588 | orchestrator | 2025-10-09 10:44:13 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:13.098255 | orchestrator | 2025-10-09 10:44:13 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:13.101253 | orchestrator | 2025-10-09 10:44:13 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:13.103555 | orchestrator | 2025-10-09 10:44:13 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:13.105065 | orchestrator | 2025-10-09 10:44:13 | INFO  | Task 3bcc3474-1cd9-42ff-a5bc-22274c0f730a is in state SUCCESS 2025-10-09 10:44:13.105205 | orchestrator | 2025-10-09 10:44:13 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:16.150437 | orchestrator | 2025-10-09 10:44:16 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:16.150530 | orchestrator | 2025-10-09 10:44:16 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:16.154630 | orchestrator | 2025-10-09 10:44:16 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:16.157067 | orchestrator | 2025-10-09 10:44:16 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:16.158083 | orchestrator | 2025-10-09 10:44:16 | INFO  | Wait 1 
second(s) until the next check 2025-10-09 10:44:19.202518 | orchestrator | 2025-10-09 10:44:19 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:19.204920 | orchestrator | 2025-10-09 10:44:19 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:19.206979 | orchestrator | 2025-10-09 10:44:19 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:19.210250 | orchestrator | 2025-10-09 10:44:19 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:19.210275 | orchestrator | 2025-10-09 10:44:19 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:22.263883 | orchestrator | 2025-10-09 10:44:22 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:22.264610 | orchestrator | 2025-10-09 10:44:22 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:22.266589 | orchestrator | 2025-10-09 10:44:22 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:22.267691 | orchestrator | 2025-10-09 10:44:22 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:22.267713 | orchestrator | 2025-10-09 10:44:22 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:25.512877 | orchestrator | 2025-10-09 10:44:25 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:25.513631 | orchestrator | 2025-10-09 10:44:25 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:25.515311 | orchestrator | 2025-10-09 10:44:25 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:25.520392 | orchestrator | 2025-10-09 10:44:25 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:25.520444 | orchestrator | 2025-10-09 10:44:25 | INFO  | Wait 1 second(s) until the next check 
2025-10-09 10:44:28.575344 | orchestrator | 2025-10-09 10:44:28 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:28.577078 | orchestrator | 2025-10-09 10:44:28 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:28.579326 | orchestrator | 2025-10-09 10:44:28 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:28.581385 | orchestrator | 2025-10-09 10:44:28 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:28.581407 | orchestrator | 2025-10-09 10:44:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:31.634694 | orchestrator | 2025-10-09 10:44:31 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:31.636361 | orchestrator | 2025-10-09 10:44:31 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:31.638406 | orchestrator | 2025-10-09 10:44:31 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:31.639902 | orchestrator | 2025-10-09 10:44:31 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:31.640284 | orchestrator | 2025-10-09 10:44:31 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:34.686580 | orchestrator | 2025-10-09 10:44:34 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:34.686689 | orchestrator | 2025-10-09 10:44:34 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:34.689249 | orchestrator | 2025-10-09 10:44:34 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:34.691175 | orchestrator | 2025-10-09 10:44:34 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:34.691403 | orchestrator | 2025-10-09 10:44:34 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:37.739636 | 
orchestrator | 2025-10-09 10:44:37 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:37.741049 | orchestrator | 2025-10-09 10:44:37 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:37.743844 | orchestrator | 2025-10-09 10:44:37 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:37.746076 | orchestrator | 2025-10-09 10:44:37 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state STARTED 2025-10-09 10:44:37.746456 | orchestrator | 2025-10-09 10:44:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:44:40.800523 | orchestrator | 2025-10-09 10:44:40 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED 2025-10-09 10:44:40.802973 | orchestrator | 2025-10-09 10:44:40 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED 2025-10-09 10:44:40.806792 | orchestrator | 2025-10-09 10:44:40 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:44:40.812125 | orchestrator | 2025-10-09 10:44:40 | INFO  | Task 6aea6d9d-eebb-4c11-b865-296879428050 is in state SUCCESS 2025-10-09 10:44:40.812651 | orchestrator | 2025-10-09 10:44:40.812671 | orchestrator | 2025-10-09 10:44:40.812679 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-10-09 10:44:40.812688 | orchestrator | 2025-10-09 10:44:40.812695 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-10-09 10:44:40.812703 | orchestrator | Thursday 09 October 2025 10:42:43 +0000 (0:00:00.357) 0:00:00.357 ****** 2025-10-09 10:44:40.812711 | orchestrator | changed: [localhost] 2025-10-09 10:44:40.812720 | orchestrator | 2025-10-09 10:44:40.812727 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-10-09 10:44:40.812735 | orchestrator | Thursday 09 October 2025 10:42:45 +0000 
(0:00:01.847) 0:00:02.204 ****** 2025-10-09 10:44:40.812742 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2025-10-09 10:44:40.812749 | orchestrator | changed: [localhost] 2025-10-09 10:44:40.812757 | orchestrator | 2025-10-09 10:44:40.812764 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-10-09 10:44:40.812771 | orchestrator | Thursday 09 October 2025 10:43:39 +0000 (0:00:54.044) 0:00:56.249 ****** 2025-10-09 10:44:40.812778 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2025-10-09 10:44:40.812786 | orchestrator | changed: [localhost] 2025-10-09 10:44:40.812793 | orchestrator | 2025-10-09 10:44:40.812800 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:44:40.812807 | orchestrator | 2025-10-09 10:44:40.812814 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:44:40.812849 | orchestrator | Thursday 09 October 2025 10:44:07 +0000 (0:00:28.066) 0:01:24.315 ****** 2025-10-09 10:44:40.812857 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:44:40.812864 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:44:40.812871 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:44:40.812878 | orchestrator | 2025-10-09 10:44:40.812885 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:44:40.812892 | orchestrator | Thursday 09 October 2025 10:44:08 +0000 (0:00:00.514) 0:01:24.830 ****** 2025-10-09 10:44:40.812899 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-10-09 10:44:40.812906 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-10-09 10:44:40.812914 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-10-09 10:44:40.812921 | orchestrator | ok: 
[testbed-node-2] => (item=enable_ironic_False) 2025-10-09 10:44:40.812928 | orchestrator | 2025-10-09 10:44:40.812935 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-10-09 10:44:40.812943 | orchestrator | skipping: no hosts matched 2025-10-09 10:44:40.812950 | orchestrator | 2025-10-09 10:44:40.812958 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:44:40.812965 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:44:40.812975 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:44:40.812984 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:44:40.812991 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:44:40.813011 | orchestrator | 2025-10-09 10:44:40.813019 | orchestrator | 2025-10-09 10:44:40.813026 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:44:40.813041 | orchestrator | Thursday 09 October 2025 10:44:09 +0000 (0:00:01.342) 0:01:26.173 ****** 2025-10-09 10:44:40.813048 | orchestrator | =============================================================================== 2025-10-09 10:44:40.813055 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 54.04s 2025-10-09 10:44:40.813063 | orchestrator | Download ironic-agent kernel ------------------------------------------- 28.07s 2025-10-09 10:44:40.813070 | orchestrator | Ensure the destination directory exists --------------------------------- 1.84s 2025-10-09 10:44:40.813077 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.34s 2025-10-09 10:44:40.813084 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.52s 2025-10-09 10:44:40.813091 | orchestrator | 2025-10-09 10:44:40.814660 | orchestrator | 2025-10-09 10:44:40.814699 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:44:40.814708 | orchestrator | 2025-10-09 10:44:40.814716 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:44:40.814723 | orchestrator | Thursday 09 October 2025 10:41:13 +0000 (0:00:00.275) 0:00:00.275 ****** 2025-10-09 10:44:40.814731 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:44:40.814739 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:44:40.814746 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:44:40.814753 | orchestrator | 2025-10-09 10:44:40.814761 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:44:40.814768 | orchestrator | Thursday 09 October 2025 10:41:14 +0000 (0:00:00.545) 0:00:00.820 ****** 2025-10-09 10:44:40.814775 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-10-09 10:44:40.814783 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-10-09 10:44:40.814794 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-10-09 10:44:40.814807 | orchestrator | 2025-10-09 10:44:40.814833 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-10-09 10:44:40.814847 | orchestrator | 2025-10-09 10:44:40.814860 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-10-09 10:44:40.814873 | orchestrator | Thursday 09 October 2025 10:41:14 +0000 (0:00:00.728) 0:00:01.548 ****** 2025-10-09 10:44:40.814885 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:44:40.814899 | orchestrator | 2025-10-09 10:44:40.814912 | 
orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-10-09 10:44:40.814925 | orchestrator | Thursday 09 October 2025 10:41:15 +0000 (0:00:00.664) 0:00:02.213 ****** 2025-10-09 10:44:40.814946 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-10-09 10:44:40.814958 | orchestrator | 2025-10-09 10:44:40.814970 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-10-09 10:44:40.814982 | orchestrator | Thursday 09 October 2025 10:41:19 +0000 (0:00:03.794) 0:00:06.008 ****** 2025-10-09 10:44:40.814995 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-10-09 10:44:40.815008 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-10-09 10:44:40.815021 | orchestrator | 2025-10-09 10:44:40.815035 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-10-09 10:44:40.815048 | orchestrator | Thursday 09 October 2025 10:41:26 +0000 (0:00:06.718) 0:00:12.727 ****** 2025-10-09 10:44:40.815062 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:44:40.815074 | orchestrator | 2025-10-09 10:44:40.815100 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-10-09 10:44:40.815112 | orchestrator | Thursday 09 October 2025 10:41:29 +0000 (0:00:03.294) 0:00:16.021 ****** 2025-10-09 10:44:40.815120 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:44:40.815147 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-10-09 10:44:40.815156 | orchestrator | 2025-10-09 10:44:40.815169 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-10-09 10:44:40.815176 | orchestrator | Thursday 09 October 2025 10:41:33 +0000 
(0:00:04.052) 0:00:20.073 ****** 2025-10-09 10:44:40.815184 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:44:40.815191 | orchestrator | 2025-10-09 10:44:40.815198 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-10-09 10:44:40.815205 | orchestrator | Thursday 09 October 2025 10:41:37 +0000 (0:00:03.531) 0:00:23.605 ****** 2025-10-09 10:44:40.815213 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-10-09 10:44:40.815220 | orchestrator | 2025-10-09 10:44:40.815227 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-10-09 10:44:40.815234 | orchestrator | Thursday 09 October 2025 10:41:40 +0000 (0:00:03.956) 0:00:27.561 ****** 2025-10-09 10:44:40.815244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.815275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.815293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.815303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815408 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815473 | orchestrator | 2025-10-09 10:44:40.815481 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-10-09 10:44:40.815489 | orchestrator | Thursday 09 October 2025 10:41:45 +0000 (0:00:04.681) 0:00:32.243 ****** 2025-10-09 10:44:40.815503 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:40.815512 | orchestrator | 2025-10-09 10:44:40.815520 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-10-09 10:44:40.815530 | orchestrator | Thursday 09 October 2025 10:41:45 +0000 (0:00:00.258) 0:00:32.502 ****** 2025-10-09 10:44:40.815542 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:40.815551 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:40.815559 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:40.815568 | 
orchestrator | 2025-10-09 10:44:40.815576 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-10-09 10:44:40.815584 | orchestrator | Thursday 09 October 2025 10:41:46 +0000 (0:00:00.629) 0:00:33.131 ****** 2025-10-09 10:44:40.815592 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:44:40.815610 | orchestrator | 2025-10-09 10:44:40.815624 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-10-09 10:44:40.815637 | orchestrator | Thursday 09 October 2025 10:41:47 +0000 (0:00:01.029) 0:00:34.161 ****** 2025-10-09 10:44:40.815650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.815676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.815691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.815704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815814 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.815923 | orchestrator | 2025-10-09 10:44:40.815931 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-10-09 10:44:40.815938 | orchestrator | Thursday 09 October 2025 10:41:55 +0000 (0:00:07.738) 0:00:41.899 ****** 2025-10-09 10:44:40.815946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:44:40.815953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:44:40.815974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.815982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.815990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.815997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816009 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:40.816017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:44:40.816030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:44:40.816046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:44:40.816092 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:40.816100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:44:40.816116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816195 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:40.816207 | orchestrator | 2025-10-09 10:44:40.816219 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-10-09 10:44:40.816231 | orchestrator | 
Thursday 09 October 2025 10:41:56 +0000 (0:00:00.855) 0:00:42.754 ****** 2025-10-09 10:44:40.816244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:44:40.816265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:44:40.816291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816353 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:40.816368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:44:40.816376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:44:40.816393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816434 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:40.816447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-10-09 10:44:40.816460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-10-09 10:44:40.816471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816511 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:44:40.816523 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:40.816531 | orchestrator | 2025-10-09 10:44:40.816538 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-10-09 10:44:40.816545 | orchestrator | Thursday 09 October 2025 10:41:58 +0000 (0:00:02.359) 0:00:45.114 ****** 2025-10-09 10:44:40.816553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.816561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.816580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.816589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816743 | orchestrator | 2025-10-09 10:44:40.816751 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-10-09 10:44:40.816758 | orchestrator | Thursday 09 October 2025 10:42:06 +0000 (0:00:07.668) 0:00:52.783 ****** 2025-10-09 10:44:40.816766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.816774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.816781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-10-09 10:44:40.816797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.816942 | orchestrator | 2025-10-09 10:44:40.816950 | orchestrator | TASK [designate : 
Copying over pools.yaml] ************************************* 2025-10-09 10:44:40.816957 | orchestrator | Thursday 09 October 2025 10:42:31 +0000 (0:00:25.628) 0:01:18.412 ****** 2025-10-09 10:44:40.816964 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-10-09 10:44:40.816972 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-10-09 10:44:40.816979 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-10-09 10:44:40.816986 | orchestrator | 2025-10-09 10:44:40.816993 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-10-09 10:44:40.817000 | orchestrator | Thursday 09 October 2025 10:42:41 +0000 (0:00:09.943) 0:01:28.355 ****** 2025-10-09 10:44:40.817008 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-10-09 10:44:40.817015 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-10-09 10:44:40.817022 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-10-09 10:44:40.817029 | orchestrator | 2025-10-09 10:44:40.817037 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-10-09 10:44:40.817044 | orchestrator | Thursday 09 October 2025 10:42:46 +0000 (0:00:04.573) 0:01:32.929 ****** 2025-10-09 10:44:40.817051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817283 | orchestrator |
2025-10-09 10:44:40.817291 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-10-09 10:44:40.817298 | orchestrator | Thursday 09 October 2025 10:42:50 +0000 (0:00:03.930) 0:01:36.860 ******
2025-10-09 10:44:40.817306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817485 | orchestrator |
2025-10-09 10:44:40.817492 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-10-09 10:44:40.817499 | orchestrator | Thursday 09 October 2025 10:42:53 +0000 (0:00:02.936) 0:01:39.796 ******
2025-10-09 10:44:40.817507 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:40.817514 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:40.817521 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:40.817528 | orchestrator |
2025-10-09 10:44:40.817536 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-10-09 10:44:40.817543 | orchestrator | Thursday 09 October 2025 10:42:54 +0000 (0:00:01.072) 0:01:40.869 ******
2025-10-09 10:44:40.817550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817638 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:40.817646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817677 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:40.817684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817743 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:40.817750 | orchestrator |
2025-10-09 10:44:40.817758 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-10-09 10:44:40.817765 | orchestrator | Thursday 09 October 2025 10:42:55 +0000 (0:00:01.077) 0:01:41.946 ******
2025-10-09 10:44:40.817772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-10-09 10:44:40.817803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-10-09 10:44:40.817818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817884 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:44:40.817942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:44:40.817950 | orchestrator |
2025-10-09 10:44:40.817958 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-10-09 10:44:40.817965 | orchestrator | Thursday 09 October 2025 10:43:01 +0000 (0:00:05.813) 0:01:47.759 ******
2025-10-09 10:44:40.817972 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:40.817979 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:40.817987 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:40.817994 | orchestrator |
2025-10-09 10:44:40.818001 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-10-09 10:44:40.818013 | orchestrator | Thursday 09 October 2025 10:43:01 +0000 (0:00:00.585) 0:01:48.345 ******
2025-10-09 10:44:40.818066 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-10-09 10:44:40.818074 | orchestrator |
2025-10-09 10:44:40.818081 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-10-09 10:44:40.818088 | orchestrator | Thursday 09 October 2025 10:43:04 +0000 (0:00:02.865) 0:01:51.210 ******
2025-10-09 10:44:40.818096 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-10-09 10:44:40.818103 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-10-09 10:44:40.818110 | orchestrator |
2025-10-09 10:44:40.818117 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-10-09 10:44:40.818125 | orchestrator | Thursday 09 October 2025 10:43:07 +0000 (0:00:02.630) 0:01:53.840 ******
2025-10-09 10:44:40.818178 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:44:40.818186 | orchestrator |
2025-10-09 10:44:40.818193 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-10-09 10:44:40.818200 | orchestrator | Thursday 09 October 2025 10:43:22 +0000 (0:00:15.636) 0:02:09.477 ******
2025-10-09 10:44:40.818208 | orchestrator |
2025-10-09 10:44:40.818215 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-10-09 10:44:40.818222 | orchestrator | Thursday 09 October 2025 10:43:23 +0000 (0:00:00.783) 0:02:10.262 ******
2025-10-09 10:44:40.818229 | orchestrator |
2025-10-09 10:44:40.818237 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-10-09 10:44:40.818244 | orchestrator | Thursday 09 October 2025 10:43:23 +0000 (0:00:00.166) 0:02:10.428 ******
2025-10-09 10:44:40.818251 | orchestrator |
2025-10-09 10:44:40.818258 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-10-09 10:44:40.818265 | orchestrator | Thursday 09 October 2025 10:43:24 +0000 (0:00:00.235) 0:02:10.664 ******
2025-10-09 10:44:40.818273 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:44:40.818280 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:44:40.818287 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:44:40.818294 | orchestrator |
2025-10-09 10:44:40.818302 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-10-09 10:44:40.818309 | orchestrator | Thursday 09 October 2025 10:43:38 +0000 (0:00:14.789) 0:02:25.453 ******
2025-10-09 10:44:40.818316 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:44:40.818323 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:44:40.818331 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:44:40.818338 | orchestrator |
2025-10-09 10:44:40.818345 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-10-09 10:44:40.818352 | orchestrator | Thursday 09 October 2025 10:43:51 +0000 (0:00:12.464) 0:02:37.918 ******
2025-10-09 10:44:40.818359 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:44:40.818367 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:44:40.818374 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:44:40.818381 | orchestrator |
2025-10-09 10:44:40.818388 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-10-09 10:44:40.818395 | orchestrator | Thursday 09 October 2025 10:43:58 +0000 (0:00:07.222) 0:02:45.140 ******
2025-10-09 10:44:40.818403 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:44:40.818410 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:44:40.818417 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:44:40.818424 | orchestrator |
2025-10-09 10:44:40.818431 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-10-09 10:44:40.818439 | orchestrator | Thursday 09 October 2025 10:44:07 +0000 (0:00:09.293) 0:02:54.434 ******
2025-10-09 10:44:40.818446 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:44:40.818453 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:44:40.818460 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:44:40.818467 | orchestrator |
2025-10-09 10:44:40.818474 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-10-09 10:44:40.818487 | orchestrator | Thursday 09 October 2025 10:44:21 +0000 (0:00:14.048) 0:03:08.483 ******
2025-10-09 10:44:40.818494 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:44:40.818502 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:44:40.818509 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:44:40.818516 | orchestrator |
2025-10-09 10:44:40.818523 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-10-09 10:44:40.818530 | orchestrator | Thursday 09 October 2025 10:44:29 +0000 (0:00:08.067) 0:03:16.550 ******
2025-10-09 10:44:40.818537 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:44:40.818545 | orchestrator |
2025-10-09 10:44:40.818552 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:44:40.818560 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-10-09 10:44:40.818567 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-09 10:44:40.818575 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-10-09 10:44:40.818582 | orchestrator |
2025-10-09 10:44:40.818589 | orchestrator |
2025-10-09 10:44:40.818601 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:44:40.818613 | orchestrator | Thursday 09 October 2025 10:44:37 +0000 (0:00:07.907) 0:03:24.458 ******
2025-10-09 10:44:40.818620 | orchestrator | ===============================================================================
2025-10-09 10:44:40.818627 | orchestrator | designate : Copying over designate.conf -------------------------------- 25.63s
2025-10-09 10:44:40.818635 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.64s
2025-10-09 10:44:40.818642 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.79s
2025-10-09 10:44:40.818649 | orchestrator | designate : Restart designate-mdns container --------------------------- 14.05s
2025-10-09 10:44:40.818656 | orchestrator | designate : Restart designate-api container ---------------------------- 12.46s
2025-10-09 10:44:40.818663 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 9.94s
2025-10-09 10:44:40.818671 | orchestrator | designate : Restart designate-producer container ------------------------ 9.29s
2025-10-09 10:44:40.818678 | orchestrator | designate : Restart designate-worker container -------------------------- 8.07s
2025-10-09 10:44:40.818685 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.91s
2025-10-09 10:44:40.818692 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.74s
2025-10-09 10:44:40.818699 | orchestrator | designate : Copying over config.json files for services ----------------- 7.67s
2025-10-09 10:44:40.818707 | orchestrator | designate : Restart designate-central container ------------------------- 7.22s
2025-10-09 10:44:40.818714 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.72s
2025-10-09 10:44:40.818721 | orchestrator | designate : Check designate containers ---------------------------------- 5.81s
2025-10-09 10:44:40.818728 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.68s
2025-10-09 10:44:40.818735 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.57s
2025-10-09 10:44:40.818743 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.05s
2025-10-09 10:44:40.818750 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.96s
2025-10-09 10:44:40.818757 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.93s
2025-10-09 10:44:40.818764 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.80s
2025-10-09 10:44:40.818771 | orchestrator | 2025-10-09 10:44:40 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:44:40.818779 | orchestrator | 2025-10-09 10:44:40 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:44:43.864097 | orchestrator | 2025-10-09 10:44:43 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:44:43.864483 | orchestrator | 2025-10-09 10:44:43 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED
2025-10-09 10:44:43.865325 | orchestrator | 2025-10-09 10:44:43 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:44:43.866542 | orchestrator | 2025-10-09 10:44:43 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:44:43.866641 | orchestrator | 2025-10-09 10:44:43 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:44:46.893931 | orchestrator | 2025-10-09 10:44:46 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:44:46.896763 | orchestrator | 2025-10-09 10:44:46 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED
2025-10-09 10:44:46.897901 | orchestrator | 2025-10-09 10:44:46 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:44:46.900438 | orchestrator | 2025-10-09 10:44:46 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:44:46.900534 | orchestrator | 2025-10-09 10:44:46 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:44:49.953360 | orchestrator | 2025-10-09 10:44:49 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:44:49.956152 | orchestrator | 2025-10-09 10:44:49 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED
2025-10-09 10:44:49.957624 | orchestrator | 2025-10-09 10:44:49 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:44:49.960856 | orchestrator | 2025-10-09 10:44:49 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:44:49.960890 | orchestrator | 2025-10-09 10:44:49 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:44:53.005336 | orchestrator | 2025-10-09 10:44:53 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:44:53.005446 | orchestrator | 2025-10-09 10:44:53 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state STARTED
2025-10-09 10:44:53.008947 | orchestrator | 2025-10-09 10:44:53 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:44:53.008991 | orchestrator | 2025-10-09 10:44:53 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:44:53.009004 | orchestrator | 2025-10-09 10:44:53 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:44:56.059231 | orchestrator | 2025-10-09 10:44:56 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:44:56.061937 | orchestrator | 2025-10-09 10:44:56 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:44:56.067226 | orchestrator | 2025-10-09 10:44:56 | INFO  | Task cdc1c1f5-14ac-4edd-beb8-bf38112ce3db is in state SUCCESS
2025-10-09 10:44:56.069704 | orchestrator |
2025-10-09 10:44:56.069721 | orchestrator |
2025-10-09 10:44:56.069727 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:44:56.069733 | orchestrator |
2025-10-09 10:44:56.069738 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:44:56.069743 | orchestrator | Thursday 09 October 2025 10:40:15 +0000 (0:00:00.303) 0:00:00.303 ******
2025-10-09 10:44:56.069748 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:44:56.069754 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:44:56.069759 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:44:56.069764 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:44:56.069769 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:44:56.069792 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:44:56.069797 | orchestrator |
2025-10-09 10:44:56.069802 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:44:56.069806 | orchestrator | Thursday 09 October 2025 10:40:16 +0000 (0:00:00.750) 0:00:01.053 ******
2025-10-09 10:44:56.069811 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-10-09 10:44:56.069816 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-10-09 10:44:56.069821 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-10-09 10:44:56.069825 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-10-09 10:44:56.069830 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-10-09 10:44:56.069835 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-10-09 10:44:56.069839 | orchestrator |
2025-10-09 10:44:56.069844 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-10-09 10:44:56.069848 | orchestrator |
2025-10-09 10:44:56.069853 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-10-09 10:44:56.069858 | orchestrator | Thursday 09 October 2025 10:40:16 +0000 (0:00:00.656) 0:00:01.710 ******
2025-10-09 10:44:56.069863 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:44:56.069870 | orchestrator |
2025-10-09 10:44:56.069874 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-10-09 10:44:56.069879 | orchestrator | Thursday 09 October 2025 10:40:18 +0000 (0:00:01.323) 0:00:03.033 ******
2025-10-09 10:44:56.069884 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:44:56.069889 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:44:56.069893 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:44:56.069898 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:44:56.069903 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:44:56.069907 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:44:56.069912 | orchestrator |
2025-10-09 10:44:56.069916 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-10-09 10:44:56.069921 | orchestrator | Thursday 09 October 2025 10:40:19 +0000 (0:00:01.509) 0:00:04.543 ******
2025-10-09 10:44:56.069926 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:44:56.069930 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:44:56.069935 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:44:56.069939 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:44:56.069944 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:44:56.069948 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:44:56.069953 | orchestrator |
2025-10-09 10:44:56.069957 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-10-09 10:44:56.069962 | orchestrator | Thursday 09 October 2025 10:40:20 +0000 (0:00:01.112) 0:00:05.656 ******
2025-10-09 10:44:56.069967 | orchestrator | ok: [testbed-node-0] => {
2025-10-09 10:44:56.069972 | orchestrator |  "changed": false,
2025-10-09 10:44:56.069977 | orchestrator |  "msg": "All assertions passed"
2025-10-09 10:44:56.069981 | orchestrator | }
2025-10-09 10:44:56.069986 | orchestrator | ok: [testbed-node-1] => {
2025-10-09 10:44:56.069991 | orchestrator |  "changed": false,
2025-10-09 10:44:56.069995 | orchestrator |  "msg": "All assertions passed"
2025-10-09 10:44:56.070000 | orchestrator | }
2025-10-09 10:44:56.070004 | orchestrator | ok: [testbed-node-2] => {
2025-10-09 10:44:56.070009 | orchestrator |  "changed": false,
2025-10-09 10:44:56.070039 | orchestrator |  "msg": "All assertions passed"
2025-10-09 10:44:56.070044 | orchestrator | }
2025-10-09 10:44:56.070048 | orchestrator | ok: [testbed-node-3] => {
2025-10-09 10:44:56.070053 | orchestrator |  "changed": false,
2025-10-09 10:44:56.070058 | orchestrator |  "msg": "All assertions passed"
2025-10-09 10:44:56.070062 | orchestrator | }
2025-10-09 10:44:56.070067 | orchestrator | ok: [testbed-node-4] => {
2025-10-09 10:44:56.070071 | orchestrator |  "changed": false,
2025-10-09 10:44:56.070076 | orchestrator |  "msg": "All assertions passed"
2025-10-09 10:44:56.070085 | orchestrator | }
2025-10-09 10:44:56.070089 | orchestrator | ok: [testbed-node-5] => {
2025-10-09 10:44:56.070094 | orchestrator |  "changed": false,
2025-10-09 10:44:56.070099 | orchestrator |  "msg": "All assertions passed"
2025-10-09 10:44:56.070103 | orchestrator | }
2025-10-09 10:44:56.070108 | orchestrator |
2025-10-09 10:44:56.070112 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-10-09 10:44:56.070117 | orchestrator | Thursday 09 October 2025 10:40:22 +0000 (0:00:01.288) 0:00:06.944 ******
2025-10-09 10:44:56.070122 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.070142 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.070147 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.070152 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.070156 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.070161 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.070166 | orchestrator |
2025-10-09 10:44:56.070170 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-10-09 10:44:56.070175 | orchestrator | Thursday 09 October 2025 10:40:23 +0000 (0:00:00.970) 0:00:07.914 ******
2025-10-09 10:44:56.070189 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-10-09 10:44:56.070194 | orchestrator |
2025-10-09 10:44:56.070199 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-10-09 10:44:56.070203 | orchestrator | Thursday 09 October 2025 10:40:27 +0000 (0:00:04.097) 0:00:12.012 ******
2025-10-09 10:44:56.070208 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-10-09 10:44:56.070214 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-10-09 10:44:56.070218 | orchestrator |
2025-10-09 10:44:56.070229 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-10-09 10:44:56.070234 | orchestrator | Thursday 09 October 2025 10:40:34 +0000 (0:00:07.101) 0:00:19.114 ******
2025-10-09 10:44:56.070239 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-10-09 10:44:56.070244 | orchestrator |
2025-10-09 10:44:56.070248 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-10-09 10:44:56.070253 | orchestrator | Thursday 09 October 2025 10:40:37 +0000 (0:00:03.333) 0:00:22.447 ******
2025-10-09 10:44:56.070257 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-09 10:44:56.070262 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-10-09 10:44:56.070267 | orchestrator |
2025-10-09 10:44:56.070271 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-10-09 10:44:56.070276 | orchestrator | Thursday 09 October 2025 10:40:41 +0000 (0:00:04.094) 0:00:26.542 ******
2025-10-09 10:44:56.070280 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-10-09 10:44:56.070285 | orchestrator |
2025-10-09 10:44:56.070289 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-10-09 10:44:56.070294 | orchestrator | Thursday 09 October 2025 10:40:45 +0000 (0:00:03.555) 0:00:30.097 ******
2025-10-09 10:44:56.070298 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-10-09 10:44:56.070303 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-10-09 10:44:56.070307 | orchestrator |
2025-10-09 10:44:56.070312 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-10-09 10:44:56.070317 | orchestrator | Thursday 09 October 2025 10:40:53 +0000 (0:00:08.173) 0:00:38.270 ******
2025-10-09 10:44:56.070321 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.070326 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.070332 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.070337 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.070342 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.070347 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.070352 | orchestrator |
2025-10-09 10:44:56.070361 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-10-09 10:44:56.070366 | orchestrator | Thursday 09 October 2025 10:40:54 +0000 (0:00:00.862) 0:00:39.133 ******
2025-10-09 10:44:56.070371 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.070391 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.070397 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.070402 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.070407 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.070412 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.070418 | orchestrator |
2025-10-09 10:44:56.070423 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-10-09 10:44:56.070428 | orchestrator | Thursday 09 October 2025 10:40:56 +0000 (0:00:02.305) 0:00:41.438 ******
2025-10-09 10:44:56.070434 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:44:56.070439 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:44:56.070444 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:44:56.070449 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:44:56.070454 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:44:56.070460 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:44:56.070465 | orchestrator |
2025-10-09 10:44:56.070470 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-10-09 10:44:56.070475 | orchestrator | Thursday 09 October 2025 10:40:57 +0000 (0:00:01.201) 0:00:42.639 ******
2025-10-09 10:44:56.070481 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.070518 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.070523 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.070528 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.070534 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.070539 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.070544 | orchestrator |
2025-10-09 10:44:56.070549 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-10-09 10:44:56.070555 | orchestrator | Thursday 09 October 2025 10:41:01 +0000 (0:00:03.762) 0:00:46.402 ******
2025-10-09 10:44:56.070562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.070591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.070617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.070628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.070633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.070640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:44:56.070645 | orchestrator |
2025-10-09 10:44:56.070650 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-10-09 10:44:56.070659 | orchestrator | Thursday 09 October 2025 10:41:04 +0000 (0:00:03.208) 0:00:49.610 ******
2025-10-09 10:44:56.070683 | orchestrator | [WARNING]: Skipped
2025-10-09 10:44:56.070688 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-10-09 10:44:56.070693 | orchestrator | due to this access issue:
2025-10-09 10:44:56.070698 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-10-09 10:44:56.070702 | orchestrator | a directory
2025-10-09 10:44:56.070707 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-10-09 10:44:56.070711 | orchestrator |
2025-10-09 10:44:56.070716 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-10-09 10:44:56.070724 | orchestrator | Thursday 09 October 2025 10:41:06 +0000 (0:00:01.273) 0:00:50.883 ******
2025-10-09 10:44:56.070733 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:44:56.070739 | orchestrator |
2025-10-09 10:44:56.070743 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-10-09 10:44:56.070748 | orchestrator | Thursday 09 October 2025 10:41:07 +0000 (0:00:01.493) 0:00:52.377 ******
2025-10-09 10:44:56.070753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name':
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.070758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.070763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.070771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.070780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.070790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.070795 | orchestrator | 2025-10-09 10:44:56.070800 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-10-09 10:44:56.070804 | orchestrator | Thursday 09 October 2025 10:41:11 +0000 (0:00:04.422) 0:00:56.800 ****** 2025-10-09 10:44:56.070809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.070814 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.070819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.070824 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.070831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.070839 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.070848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.070853 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.070858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.070863 | orchestrator | skipping: [testbed-node-2] 
2025-10-09 10:44:56.070867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.070872 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.070877 | orchestrator | 2025-10-09 10:44:56.070881 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-10-09 10:44:56.070886 | orchestrator | Thursday 09 October 2025 10:41:15 +0000 (0:00:03.677) 0:01:00.478 ****** 2025-10-09 10:44:56.070891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.070898 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.070913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.070918 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.070923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.070928 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.070933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.070937 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.070942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.070947 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.070952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:44:56.070959 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.070964 | orchestrator |
2025-10-09 10:44:56.070971 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-10-09 10:44:56.070976 | orchestrator | Thursday 09 October 2025 10:41:18 +0000 (0:00:03.006) 0:01:03.485 ******
2025-10-09 10:44:56.070980 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.070985 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.070989 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.070994 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.070998 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071003 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071007 | orchestrator |
2025-10-09 10:44:56.071012 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-10-09 10:44:56.071019 | orchestrator | Thursday 09 October 2025 10:41:20 +0000 (0:00:00.122) 0:01:05.816 ******
2025-10-09 10:44:56.071024 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071028 | orchestrator |
2025-10-09 10:44:56.071033 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-10-09 10:44:56.071038 | orchestrator | Thursday 09 October 2025 10:41:21 +0000 (0:00:00.122) 0:01:05.939 ******
2025-10-09 10:44:56.071042 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071047 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.071051 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.071056 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071060 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071065 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071069 | orchestrator |
2025-10-09 10:44:56.071074 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-10-09 10:44:56.071078 | orchestrator | Thursday 09 October 2025 10:41:21 +0000 (0:00:00.880) 0:01:06.819 ******
2025-10-09 10:44:56.071083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-09 10:44:56.071088 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True,
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.071100 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.071105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.071110 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.071256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.071265 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.071269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.071274 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.071279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.071284 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.071288 | orchestrator | 2025-10-09 10:44:56.071293 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-10-09 10:44:56.071297 | orchestrator | Thursday 09 October 2025 10:41:24 +0000 (0:00:02.236) 0:01:09.056 ****** 2025-10-09 10:44:56.071307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.071312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.071324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.071330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.071335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.071344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.071348 | orchestrator | 2025-10-09 10:44:56.071353 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-10-09 10:44:56.071358 | orchestrator | Thursday 09 
October 2025 10:41:29 +0000 (0:00:05.307) 0:01:14.363 ****** 2025-10-09 10:44:56.071365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.071373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.071379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.071383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.071391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.071396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.071401 | orchestrator | 2025-10-09 10:44:56.071408 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-10-09 10:44:56.071413 | orchestrator | Thursday 09 October 2025 10:41:37 +0000 (0:00:07.859) 0:01:22.222 ****** 2025-10-09 10:44:56.071423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.071428 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.071432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.071440 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.071445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.071450 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.071455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.071460 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.071468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:44:56.071473 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-10-09 10:44:56.071485 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071489 | orchestrator |
2025-10-09 10:44:56.071494 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-10-09 10:44:56.071499 | orchestrator | Thursday 09 October 2025 10:41:40 +0000 (0:00:03.512) 0:01:25.735 ******
2025-10-09 10:44:56.071506 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071511 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071516 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071520 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:44:56.071525 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:44:56.071529 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:44:56.071534 | orchestrator |
2025-10-09 10:44:56.071538 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-10-09 10:44:56.071543 | orchestrator |
Thursday 09 October 2025 10:41:45 +0000 (0:00:04.568) 0:01:30.303 ****** 2025-10-09 10:44:56.071548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.071553 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.071557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.071562 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.071567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.071574 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.071583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.071593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-09 10:44:56.071598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-10-09 10:44:56.071603 | orchestrator |
2025-10-09 10:44:56.071608 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-10-09 10:44:56.071612 | orchestrator | Thursday 09 October 2025 10:41:50 +0000 (0:00:05.400) 0:01:35.704 ******
2025-10-09 10:44:56.071631 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.071636 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071640 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.071645 |
orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071649 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071654 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071659 | orchestrator |
2025-10-09 10:44:56.071664 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-10-09 10:44:56.071668 | orchestrator | Thursday 09 October 2025 10:41:53 +0000 (0:00:02.538) 0:01:38.242 ******
2025-10-09 10:44:56.071673 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.071677 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.071682 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071687 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071691 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071696 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071700 | orchestrator |
2025-10-09 10:44:56.071705 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-10-09 10:44:56.071710 | orchestrator | Thursday 09 October 2025 10:41:56 +0000 (0:00:03.047) 0:01:40.899 ******
2025-10-09 10:44:56.071714 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071719 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.071724 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.071728 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071733 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071737 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071742 | orchestrator |
2025-10-09 10:44:56.071747 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-10-09 10:44:56.071751 | orchestrator | Thursday 09 October 2025 10:41:59 +0000 (0:00:03.247) 0:01:43.947 ******
2025-10-09 10:44:56.071760 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071768 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.071773 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071777 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.071782 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071786 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071791 | orchestrator |
2025-10-09 10:44:56.071796 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-10-09 10:44:56.071800 | orchestrator | Thursday 09 October 2025 10:42:02 +0000 (0:00:04.526) 0:01:47.194 ******
2025-10-09 10:44:56.071805 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.071810 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071814 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071819 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071826 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.071831 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071836 | orchestrator |
2025-10-09 10:44:56.071841 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-10-09 10:44:56.071845 | orchestrator | Thursday 09 October 2025 10:42:06 +0000 (0:00:04.526) 0:01:51.720 ******
2025-10-09 10:44:56.071850 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071854 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.071859 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071864 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.071868 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071873 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071877 | orchestrator |
2025-10-09 10:44:56.071882 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-10-09 10:44:56.071886 | orchestrator | Thursday 09 October 2025 10:42:10
+0000 (0:00:03.493) 0:01:55.214 ******
2025-10-09 10:44:56.071892 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-09 10:44:56.071897 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:44:56.071902 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-09 10:44:56.071907 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:44:56.071912 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-09 10:44:56.071918 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:44:56.071923 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-09 10:44:56.071928 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:44:56.071933 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-09 10:44:56.071938 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:44:56.071943 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-10-09 10:44:56.071948 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:44:56.071954 | orchestrator |
2025-10-09 10:44:56.071959 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-10-09 10:44:56.071964 | orchestrator | Thursday 09 October 2025 10:42:14 +0000 (0:00:04.281) 0:01:59.495 ******
2025-10-09 10:44:56.071969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.071979 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.071984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.071990 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.072006 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.072017 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.072028 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.072043 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072048 | orchestrator | 2025-10-09 10:44:56.072053 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-10-09 10:44:56.072059 | orchestrator | Thursday 09 October 2025 10:42:18 +0000 (0:00:03.475) 0:02:02.971 ****** 2025-10-09 10:44:56.072067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.072072 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.072215 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.072226 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.072244 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2025-10-09 10:44:56.072254 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.072267 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072271 | orchestrator | 2025-10-09 10:44:56.072276 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-10-09 10:44:56.072280 | orchestrator | Thursday 09 October 2025 10:42:23 +0000 (0:00:04.988) 0:02:07.959 ****** 2025-10-09 10:44:56.072285 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072292 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072297 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072301 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072306 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072310 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072315 | orchestrator | 2025-10-09 10:44:56.072320 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-10-09 10:44:56.072324 | orchestrator | Thursday 09 October 2025 10:42:27 +0000 (0:00:03.932) 0:02:11.891 ****** 2025-10-09 10:44:56.072329 | orchestrator | skipping: 
[testbed-node-0] 2025-10-09 10:44:56.072333 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072338 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072342 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:44:56.072347 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:44:56.072351 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:44:56.072356 | orchestrator | 2025-10-09 10:44:56.072360 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-10-09 10:44:56.072365 | orchestrator | Thursday 09 October 2025 10:42:31 +0000 (0:00:04.812) 0:02:16.704 ****** 2025-10-09 10:44:56.072369 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072374 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072379 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072387 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072392 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072396 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072401 | orchestrator | 2025-10-09 10:44:56.072405 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-10-09 10:44:56.072410 | orchestrator | Thursday 09 October 2025 10:42:37 +0000 (0:00:05.224) 0:02:21.928 ****** 2025-10-09 10:44:56.072415 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072419 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072423 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072428 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072432 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072437 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072441 | orchestrator | 2025-10-09 10:44:56.072446 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-10-09 10:44:56.072450 | 
orchestrator | Thursday 09 October 2025 10:42:42 +0000 (0:00:05.160) 0:02:27.089 ****** 2025-10-09 10:44:56.072455 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072459 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072464 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072468 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072473 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072477 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072482 | orchestrator | 2025-10-09 10:44:56.072486 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-10-09 10:44:56.072491 | orchestrator | Thursday 09 October 2025 10:42:47 +0000 (0:00:04.947) 0:02:32.036 ****** 2025-10-09 10:44:56.072495 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072500 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072505 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072509 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072514 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072518 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072523 | orchestrator | 2025-10-09 10:44:56.072527 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-10-09 10:44:56.072532 | orchestrator | Thursday 09 October 2025 10:42:50 +0000 (0:00:03.272) 0:02:35.309 ****** 2025-10-09 10:44:56.072536 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072541 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072545 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072550 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072554 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072559 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072563 | orchestrator | 2025-10-09 
10:44:56.072568 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-10-09 10:44:56.072572 | orchestrator | Thursday 09 October 2025 10:42:52 +0000 (0:00:02.113) 0:02:37.423 ****** 2025-10-09 10:44:56.072577 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072581 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072586 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072590 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072595 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072599 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072604 | orchestrator | 2025-10-09 10:44:56.072609 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-10-09 10:44:56.072613 | orchestrator | Thursday 09 October 2025 10:42:55 +0000 (0:00:02.757) 0:02:40.180 ****** 2025-10-09 10:44:56.072618 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072622 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072627 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072631 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072636 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072640 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072649 | orchestrator | 2025-10-09 10:44:56.072653 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-10-09 10:44:56.072658 | orchestrator | Thursday 09 October 2025 10:42:57 +0000 (0:00:02.457) 0:02:42.637 ****** 2025-10-09 10:44:56.072663 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:44:56.072667 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072674 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:44:56.072679 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072683 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:44:56.072688 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072693 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:44:56.072697 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072704 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:44:56.072708 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072713 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-10-09 10:44:56.072718 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072722 | orchestrator | 2025-10-09 10:44:56.072727 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-10-09 10:44:56.072731 | orchestrator | Thursday 09 October 2025 10:42:59 +0000 (0:00:02.182) 0:02:44.820 ****** 2025-10-09 10:44:56.072736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.072741 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.072750 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.072763 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-10-09 10:44:56.072776 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.072788 | orchestrator | skipping: [testbed-node-3] 2025-10-09 
10:44:56.072793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-10-09 10:44:56.072798 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072802 | orchestrator | 2025-10-09 10:44:56.072807 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-10-09 10:44:56.072811 | orchestrator | Thursday 09 October 2025 10:43:02 +0000 (0:00:02.597) 0:02:47.417 ****** 2025-10-09 10:44:56.072816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.072825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.072837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.072842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.072847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-10-09 10:44:56.072852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-10-09 10:44:56.072862 | orchestrator | 2025-10-09 10:44:56.072866 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-10-09 10:44:56.072871 | orchestrator | Thursday 09 October 2025 10:43:05 +0000 (0:00:03.410) 0:02:50.828 ****** 2025-10-09 10:44:56.072876 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:44:56.072880 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:44:56.072885 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:44:56.072889 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:44:56.072894 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:44:56.072898 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:44:56.072903 | orchestrator | 2025-10-09 10:44:56.072907 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-10-09 10:44:56.072912 | orchestrator | Thursday 09 October 2025 10:43:06 +0000 (0:00:00.521) 0:02:51.349 ****** 2025-10-09 10:44:56.072917 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:44:56.072921 | orchestrator | 2025-10-09 10:44:56.072926 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-10-09 10:44:56.072930 | orchestrator | Thursday 09 October 2025 10:43:08 +0000 (0:00:02.119) 0:02:53.469 ****** 2025-10-09 10:44:56.072935 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:44:56.072939 | orchestrator | 2025-10-09 10:44:56.072944 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-10-09 10:44:56.072948 | orchestrator | Thursday 09 October 2025 10:43:10 +0000 (0:00:02.029) 
0:02:55.498 ****** 2025-10-09 10:44:56.072953 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:44:56.072958 | orchestrator | 2025-10-09 10:44:56.072962 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-09 10:44:56.072967 | orchestrator | Thursday 09 October 2025 10:43:53 +0000 (0:00:43.029) 0:03:38.528 ****** 2025-10-09 10:44:56.072971 | orchestrator | 2025-10-09 10:44:56.072979 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-09 10:44:56.072984 | orchestrator | Thursday 09 October 2025 10:43:53 +0000 (0:00:00.104) 0:03:38.633 ****** 2025-10-09 10:44:56.072988 | orchestrator | 2025-10-09 10:44:56.072993 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-09 10:44:56.072997 | orchestrator | Thursday 09 October 2025 10:43:54 +0000 (0:00:00.632) 0:03:39.266 ****** 2025-10-09 10:44:56.073002 | orchestrator | 2025-10-09 10:44:56.073006 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-09 10:44:56.073011 | orchestrator | Thursday 09 October 2025 10:43:54 +0000 (0:00:00.192) 0:03:39.459 ****** 2025-10-09 10:44:56.073016 | orchestrator | 2025-10-09 10:44:56.073022 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-09 10:44:56.073027 | orchestrator | Thursday 09 October 2025 10:43:54 +0000 (0:00:00.117) 0:03:39.576 ****** 2025-10-09 10:44:56.073032 | orchestrator | 2025-10-09 10:44:56.073036 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-10-09 10:44:56.073041 | orchestrator | Thursday 09 October 2025 10:43:54 +0000 (0:00:00.138) 0:03:39.715 ****** 2025-10-09 10:44:56.073045 | orchestrator | 2025-10-09 10:44:56.073050 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-10-09 
10:44:56.073054 | orchestrator | Thursday 09 October 2025 10:43:55 +0000 (0:00:00.162) 0:03:39.878 ****** 2025-10-09 10:44:56.073059 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:44:56.073063 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:44:56.073068 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:44:56.073073 | orchestrator | 2025-10-09 10:44:56.073077 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-10-09 10:44:56.073082 | orchestrator | Thursday 09 October 2025 10:44:29 +0000 (0:00:34.828) 0:04:14.706 ****** 2025-10-09 10:44:56.073086 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:44:56.073095 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:44:56.073099 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:44:56.073104 | orchestrator | 2025-10-09 10:44:56.073109 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:44:56.073113 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-09 10:44:56.073119 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-10-09 10:44:56.073124 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-10-09 10:44:56.073157 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-09 10:44:56.073162 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-09 10:44:56.073166 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-10-09 10:44:56.073171 | orchestrator | 2025-10-09 10:44:56.073176 | orchestrator | 2025-10-09 10:44:56.073180 | orchestrator | TASKS RECAP ******************************************************************** 
2025-10-09 10:44:56.073185 | orchestrator | Thursday 09 October 2025 10:44:53 +0000 (0:00:23.850) 0:04:38.556 ******
2025-10-09 10:44:56.073190 | orchestrator | ===============================================================================
2025-10-09 10:44:56.073194 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.03s
2025-10-09 10:44:56.073199 | orchestrator | neutron : Restart neutron-server container ----------------------------- 34.83s
2025-10-09 10:44:56.073203 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 23.85s
2025-10-09 10:44:56.073208 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.17s
2025-10-09 10:44:56.073212 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.86s
2025-10-09 10:44:56.073217 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.10s
2025-10-09 10:44:56.073221 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.40s
2025-10-09 10:44:56.073226 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.31s
2025-10-09 10:44:56.073231 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 5.22s
2025-10-09 10:44:56.073235 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 5.16s
2025-10-09 10:44:56.073240 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.99s
2025-10-09 10:44:56.073244 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 4.95s
2025-10-09 10:44:56.073249 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.81s
2025-10-09 10:44:56.073253 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.57s
2025-10-09 10:44:56.073258 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.53s
2025-10-09 10:44:56.073263 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.42s
2025-10-09 10:44:56.073267 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 4.28s
2025-10-09 10:44:56.073272 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.10s
2025-10-09 10:44:56.073279 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.09s
2025-10-09 10:44:56.073284 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.93s
2025-10-09 10:44:56.073288 | orchestrator | 2025-10-09 10:44:56 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:44:56.074538 | orchestrator | 2025-10-09 10:44:56 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:44:56.074548 | orchestrator | 2025-10-09 10:44:56 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:44:59.112077 | orchestrator | 2025-10-09 10:44:59 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:44:59.116713 | orchestrator | 2025-10-09 10:44:59 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:44:59.120172 | orchestrator | 2025-10-09 10:44:59 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:44:59.122565 | orchestrator | 2025-10-09 10:44:59 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:44:59.123634 | orchestrator | 2025-10-09 10:44:59 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:02.161977 | orchestrator | 2025-10-09 10:45:02 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:02.164596 | orchestrator | 2025-10-09 10:45:02 | INFO  | Task
dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:02.167120 | orchestrator | 2025-10-09 10:45:02 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:02.169270 | orchestrator | 2025-10-09 10:45:02 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:02.169293 | orchestrator | 2025-10-09 10:45:02 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:05.202758 | orchestrator | 2025-10-09 10:45:05 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:05.204319 | orchestrator | 2025-10-09 10:45:05 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:05.223115 | orchestrator | 2025-10-09 10:45:05 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:05.225025 | orchestrator | 2025-10-09 10:45:05 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:05.225051 | orchestrator | 2025-10-09 10:45:05 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:08.270555 | orchestrator | 2025-10-09 10:45:08 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:08.270759 | orchestrator | 2025-10-09 10:45:08 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:08.271544 | orchestrator | 2025-10-09 10:45:08 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:08.272317 | orchestrator | 2025-10-09 10:45:08 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:08.272414 | orchestrator | 2025-10-09 10:45:08 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:11.318535 | orchestrator | 2025-10-09 10:45:11 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:11.319722 | orchestrator | 2025-10-09 10:45:11 | INFO  | Task
dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:11.320200 | orchestrator | 2025-10-09 10:45:11 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:11.321240 | orchestrator | 2025-10-09 10:45:11 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:11.321263 | orchestrator | 2025-10-09 10:45:11 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:14.368052 | orchestrator | 2025-10-09 10:45:14 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:14.370837 | orchestrator | 2025-10-09 10:45:14 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:14.372597 | orchestrator | 2025-10-09 10:45:14 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:14.374684 | orchestrator | 2025-10-09 10:45:14 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:14.374987 | orchestrator | 2025-10-09 10:45:14 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:17.424305 | orchestrator | 2025-10-09 10:45:17 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:17.425097 | orchestrator | 2025-10-09 10:45:17 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:17.426701 | orchestrator | 2025-10-09 10:45:17 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:17.428035 | orchestrator | 2025-10-09 10:45:17 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:17.428882 | orchestrator | 2025-10-09 10:45:17 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:20.475762 | orchestrator | 2025-10-09 10:45:20 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:20.476820 | orchestrator | 2025-10-09 10:45:20 | INFO  | Task
dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:20.477840 | orchestrator | 2025-10-09 10:45:20 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:20.478888 | orchestrator | 2025-10-09 10:45:20 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:20.479628 | orchestrator | 2025-10-09 10:45:20 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:23.524906 | orchestrator | 2025-10-09 10:45:23 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:23.528632 | orchestrator | 2025-10-09 10:45:23 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:23.532025 | orchestrator | 2025-10-09 10:45:23 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:23.534608 | orchestrator | 2025-10-09 10:45:23 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:23.534638 | orchestrator | 2025-10-09 10:45:23 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:26.571763 | orchestrator | 2025-10-09 10:45:26 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:26.572860 | orchestrator | 2025-10-09 10:45:26 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:26.575616 | orchestrator | 2025-10-09 10:45:26 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:26.576240 | orchestrator | 2025-10-09 10:45:26 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:26.576734 | orchestrator | 2025-10-09 10:45:26 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:29.626633 | orchestrator | 2025-10-09 10:45:29 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:29.630677 | orchestrator | 2025-10-09 10:45:29 | INFO  | Task
dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state STARTED
2025-10-09 10:45:29.635749 | orchestrator | 2025-10-09 10:45:29 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED
2025-10-09 10:45:29.639314 | orchestrator | 2025-10-09 10:45:29 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED
2025-10-09 10:45:29.639651 | orchestrator | 2025-10-09 10:45:29 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:45:32.673578 | orchestrator | 2025-10-09 10:45:32 | INFO  | Task fe64a778-49ad-47f7-81f4-543212a9795b is in state STARTED
2025-10-09 10:45:32.675320 | orchestrator | 2025-10-09 10:45:32 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED
2025-10-09 10:45:32.677540 | orchestrator | 2025-10-09 10:45:32 | INFO  | Task dcf1f1dc-b6e2-48c1-872b-ea8f2debc50e is in state SUCCESS
2025-10-09 10:45:32.679935 | orchestrator |
2025-10-09 10:45:32.679970 | orchestrator |
2025-10-09 10:45:32.679982 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-10-09 10:45:32.679994 | orchestrator |
2025-10-09 10:45:32.680006 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-10-09 10:45:32.680018 | orchestrator | Thursday 09 October 2025 10:44:15 +0000 (0:00:00.359) 0:00:00.359 ******
2025-10-09 10:45:32.680030 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:45:32.680042 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:45:32.680053 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:45:32.680064 | orchestrator |
2025-10-09 10:45:32.680076 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-10-09 10:45:32.680088 | orchestrator | Thursday 09 October 2025 10:44:15 +0000 (0:00:00.304) 0:00:00.664 ******
2025-10-09 10:45:32.680099 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-10-09 10:45:32.680111 | orchestrator | ok: [testbed-node-1] =>
(item=enable_placement_True)
2025-10-09 10:45:32.680172 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-10-09 10:45:32.680185 | orchestrator |
2025-10-09 10:45:32.680196 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-10-09 10:45:32.680207 | orchestrator |
2025-10-09 10:45:32.680218 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-10-09 10:45:32.680230 | orchestrator | Thursday 09 October 2025 10:44:16 +0000 (0:00:00.461) 0:00:01.126 ******
2025-10-09 10:45:32.680259 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:45:32.680272 | orchestrator |
2025-10-09 10:45:32.680283 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-10-09 10:45:32.680294 | orchestrator | Thursday 09 October 2025 10:44:16 +0000 (0:00:00.557) 0:00:01.683 ******
2025-10-09 10:45:32.680305 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-10-09 10:45:32.680316 | orchestrator |
2025-10-09 10:45:32.680327 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-10-09 10:45:32.680339 | orchestrator | Thursday 09 October 2025 10:44:20 +0000 (0:00:03.819) 0:00:05.502 ******
2025-10-09 10:45:32.680349 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-10-09 10:45:32.680361 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-10-09 10:45:32.680372 | orchestrator |
2025-10-09 10:45:32.680383 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-10-09 10:45:32.680395 | orchestrator | Thursday 09 October 2025 10:44:27 +0000 (0:00:07.361) 0:00:12.864 ******
2025-10-09 10:45:32.680406 |
orchestrator | ok: [testbed-node-0] => (item=service)
2025-10-09 10:45:32.680417 | orchestrator |
2025-10-09 10:45:32.680428 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-10-09 10:45:32.680439 | orchestrator | Thursday 09 October 2025 10:44:31 +0000 (0:00:03.801) 0:00:16.665 ******
2025-10-09 10:45:32.680450 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-10-09 10:45:32.680461 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-10-09 10:45:32.680472 | orchestrator |
2025-10-09 10:45:32.680483 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-10-09 10:45:32.680494 | orchestrator | Thursday 09 October 2025 10:44:36 +0000 (0:00:04.465) 0:00:21.131 ******
2025-10-09 10:45:32.680532 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-10-09 10:45:32.680545 | orchestrator |
2025-10-09 10:45:32.680558 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-10-09 10:45:32.680570 | orchestrator | Thursday 09 October 2025 10:44:39 +0000 (0:00:03.667) 0:00:24.798 ******
2025-10-09 10:45:32.680582 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-10-09 10:45:32.680594 | orchestrator |
2025-10-09 10:45:32.680606 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-10-09 10:45:32.680619 | orchestrator | Thursday 09 October 2025 10:44:44 +0000 (0:00:04.403) 0:00:29.202 ******
2025-10-09 10:45:32.680631 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:45:32.680643 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:45:32.680655 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:45:32.680668 | orchestrator |
2025-10-09 10:45:32.680680 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-10-09 10:45:32.680692
| orchestrator | Thursday 09 October 2025 10:44:44 +0000 (0:00:00.512) 0:00:29.715 ****** 2025-10-09 10:45:32.680708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.680737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.680758 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.680771 | orchestrator | 2025-10-09 10:45:32.680784 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-10-09 10:45:32.680796 | orchestrator | Thursday 09 October 2025 10:44:45 +0000 (0:00:01.008) 0:00:30.723 ****** 2025-10-09 10:45:32.680816 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:32.680828 | orchestrator | 2025-10-09 10:45:32.680840 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-10-09 10:45:32.680852 | orchestrator | Thursday 09 October 2025 10:44:45 +0000 (0:00:00.156) 0:00:30.880 ****** 2025-10-09 10:45:32.680865 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:32.680877 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:32.680888 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:32.680899 | orchestrator | 2025-10-09 10:45:32.680910 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-10-09 10:45:32.680921 | orchestrator | Thursday 09 October 2025 10:44:46 +0000 
(0:00:00.581) 0:00:31.462 ****** 2025-10-09 10:45:32.680932 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:45:32.680943 | orchestrator | 2025-10-09 10:45:32.680953 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-10-09 10:45:32.680964 | orchestrator | Thursday 09 October 2025 10:44:47 +0000 (0:00:00.574) 0:00:32.037 ****** 2025-10-09 10:45:32.680976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.680996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681025 | orchestrator | 2025-10-09 10:45:32.681036 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-10-09 10:45:32.681054 | orchestrator | Thursday 09 October 2025 10:44:48 +0000 (0:00:01.722) 0:00:33.759 ****** 2025-10-09 10:45:32.681066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:45:32.681078 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:32.681089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:45:32.681173 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:32.681193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:45:32.681205 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:32.681216 | orchestrator | 2025-10-09 10:45:32.681227 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-10-09 10:45:32.681238 | orchestrator | Thursday 09 October 2025 10:44:49 +0000 (0:00:01.038) 0:00:34.797 ****** 2025-10-09 10:45:32.681284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:45:32.681305 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:32.681316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:45:32.681328 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:32.681339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:45:32.681350 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:32.681361 | orchestrator | 2025-10-09 10:45:32.681372 | 
orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-10-09 10:45:32.681383 | orchestrator | Thursday 09 October 2025 10:44:50 +0000 (0:00:00.791) 0:00:35.588 ****** 2025-10-09 10:45:32.681400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681449 | orchestrator | 2025-10-09 10:45:32.681460 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-10-09 10:45:32.681471 | orchestrator | Thursday 09 October 2025 10:44:52 +0000 (0:00:01.545) 0:00:37.134 ****** 2025-10-09 10:45:32.681482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681533 | orchestrator | 2025-10-09 10:45:32.681545 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-10-09 10:45:32.681556 | orchestrator | Thursday 09 October 2025 10:44:54 +0000 (0:00:02.577) 0:00:39.712 ****** 2025-10-09 10:45:32.681567 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-10-09 10:45:32.681578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-10-09 10:45:32.681594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-10-09 10:45:32.681605 | orchestrator | 2025-10-09 10:45:32.681616 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-10-09 10:45:32.681627 | orchestrator | Thursday 09 October 2025 10:44:56 +0000 (0:00:01.557) 0:00:41.269 ****** 2025-10-09 10:45:32.681638 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:32.681649 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:32.681660 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:32.681671 | orchestrator | 2025-10-09 10:45:32.681682 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-10-09 10:45:32.681693 | orchestrator | Thursday 09 October 2025 10:44:57 +0000 (0:00:01.510) 0:00:42.780 ****** 2025-10-09 10:45:32.681704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:45:32.681715 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:45:32.681727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:45:32.681738 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:45:32.681756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-10-09 10:45:32.681782 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:45:32.681793 | orchestrator | 2025-10-09 10:45:32.681804 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-10-09 10:45:32.681815 | orchestrator | Thursday 09 October 2025 10:44:58 +0000 (0:00:00.508) 0:00:43.289 ****** 2025-10-09 10:45:32.681831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681844 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-10-09 10:45:32.681867 | orchestrator | 2025-10-09 10:45:32.681878 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-10-09 10:45:32.681889 | orchestrator | Thursday 
09 October 2025 10:44:59 +0000 (0:00:01.149) 0:00:44.439 ****** 2025-10-09 10:45:32.681900 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:32.681910 | orchestrator | 2025-10-09 10:45:32.681921 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-10-09 10:45:32.681932 | orchestrator | Thursday 09 October 2025 10:45:02 +0000 (0:00:02.821) 0:00:47.261 ****** 2025-10-09 10:45:32.681943 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:32.681962 | orchestrator | 2025-10-09 10:45:32.681973 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-10-09 10:45:32.681984 | orchestrator | Thursday 09 October 2025 10:45:05 +0000 (0:00:02.735) 0:00:49.996 ****** 2025-10-09 10:45:32.681995 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:32.682006 | orchestrator | 2025-10-09 10:45:32.682062 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-10-09 10:45:32.682076 | orchestrator | Thursday 09 October 2025 10:45:19 +0000 (0:00:14.818) 0:01:04.814 ****** 2025-10-09 10:45:32.682088 | orchestrator | 2025-10-09 10:45:32.682099 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-10-09 10:45:32.682110 | orchestrator | Thursday 09 October 2025 10:45:19 +0000 (0:00:00.081) 0:01:04.896 ****** 2025-10-09 10:45:32.682139 | orchestrator | 2025-10-09 10:45:32.682158 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-10-09 10:45:32.682170 | orchestrator | Thursday 09 October 2025 10:45:20 +0000 (0:00:00.075) 0:01:04.971 ****** 2025-10-09 10:45:32.682181 | orchestrator | 2025-10-09 10:45:32.682192 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-10-09 10:45:32.682203 | orchestrator | Thursday 09 October 2025 10:45:20 +0000 (0:00:00.084) 0:01:05.056 ****** 
2025-10-09 10:45:32.682214 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:45:32.682225 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:45:32.682236 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:45:32.682247 | orchestrator | 2025-10-09 10:45:32.682258 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:45:32.682271 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:45:32.682284 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:45:32.682296 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:45:32.682307 | orchestrator | 2025-10-09 10:45:32.682318 | orchestrator | 2025-10-09 10:45:32.682329 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:45:32.682345 | orchestrator | Thursday 09 October 2025 10:45:30 +0000 (0:00:10.620) 0:01:15.677 ****** 2025-10-09 10:45:32.682356 | orchestrator | =============================================================================== 2025-10-09 10:45:32.682367 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.82s 2025-10-09 10:45:32.682378 | orchestrator | placement : Restart placement-api container ---------------------------- 10.62s 2025-10-09 10:45:32.682390 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.36s 2025-10-09 10:45:32.682401 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.47s 2025-10-09 10:45:32.682412 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.40s 2025-10-09 10:45:32.682423 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.82s 2025-10-09 
10:45:32.682434 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.80s 2025-10-09 10:45:32.682445 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.67s 2025-10-09 10:45:32.682456 | orchestrator | placement : Creating placement databases -------------------------------- 2.82s 2025-10-09 10:45:32.682467 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.74s 2025-10-09 10:45:32.682478 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.58s 2025-10-09 10:45:32.682489 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.72s 2025-10-09 10:45:32.682500 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.56s 2025-10-09 10:45:32.682511 | orchestrator | placement : Copying over config.json files for services ----------------- 1.55s 2025-10-09 10:45:32.682530 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.51s 2025-10-09 10:45:32.682541 | orchestrator | placement : Check placement containers ---------------------------------- 1.15s 2025-10-09 10:45:32.682552 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.04s 2025-10-09 10:45:32.682563 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.01s 2025-10-09 10:45:32.682574 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.79s 2025-10-09 10:45:32.682585 | orchestrator | placement : Set placement policy file ----------------------------------- 0.58s 2025-10-09 10:45:32.682596 | orchestrator | 2025-10-09 10:45:32 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:45:32.682612 | orchestrator | 2025-10-09 10:45:32 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state 
STARTED 2025-10-09 10:45:32.682623 | orchestrator | 2025-10-09 10:45:32 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:35.724513 | orchestrator | 2025-10-09 10:45:35 | INFO  | Task fe64a778-49ad-47f7-81f4-543212a9795b is in state STARTED 2025-10-09 10:45:35.725165 | orchestrator | 2025-10-09 10:45:35 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:45:35.726897 | orchestrator | 2025-10-09 10:45:35 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:45:35.728027 | orchestrator | 2025-10-09 10:45:35 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:45:35.728266 | orchestrator | 2025-10-09 10:45:35 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:38.761359 | orchestrator | 2025-10-09 10:45:38 | INFO  | Task fe64a778-49ad-47f7-81f4-543212a9795b is in state STARTED 2025-10-09 10:45:38.761607 | orchestrator | 2025-10-09 10:45:38 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:45:38.762188 | orchestrator | 2025-10-09 10:45:38 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:45:38.762957 | orchestrator | 2025-10-09 10:45:38 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:45:38.762983 | orchestrator | 2025-10-09 10:45:38 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:41.799470 | orchestrator | 2025-10-09 10:45:41 | INFO  | Task fe64a778-49ad-47f7-81f4-543212a9795b is in state SUCCESS 2025-10-09 10:45:41.799568 | orchestrator | 2025-10-09 10:45:41 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:45:41.802270 | orchestrator | 2025-10-09 10:45:41 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:45:41.803049 | orchestrator | 2025-10-09 10:45:41 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 
10:45:41.803805 | orchestrator | 2025-10-09 10:45:41 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:45:41.803834 | orchestrator | 2025-10-09 10:45:41 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:44.831838 | orchestrator | 2025-10-09 10:45:44 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:45:44.832175 | orchestrator | 2025-10-09 10:45:44 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:45:44.833002 | orchestrator | 2025-10-09 10:45:44 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:45:44.834223 | orchestrator | 2025-10-09 10:45:44 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:45:44.834259 | orchestrator | 2025-10-09 10:45:44 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:47.865872 | orchestrator | 2025-10-09 10:45:47 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:45:47.866461 | orchestrator | 2025-10-09 10:45:47 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:45:47.867355 | orchestrator | 2025-10-09 10:45:47 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:45:47.868377 | orchestrator | 2025-10-09 10:45:47 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:45:47.868398 | orchestrator | 2025-10-09 10:45:47 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:50.908341 | orchestrator | 2025-10-09 10:45:50 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:45:50.910560 | orchestrator | 2025-10-09 10:45:50 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:45:50.913006 | orchestrator | 2025-10-09 10:45:50 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:45:50.915512 | orchestrator 
| 2025-10-09 10:45:50 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:45:50.915536 | orchestrator | 2025-10-09 10:45:50 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:53.969241 | orchestrator | 2025-10-09 10:45:53 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:45:53.970275 | orchestrator | 2025-10-09 10:45:53 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:45:53.972082 | orchestrator | 2025-10-09 10:45:53 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:45:53.973349 | orchestrator | 2025-10-09 10:45:53 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:45:53.973376 | orchestrator | 2025-10-09 10:45:53 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:45:57.037296 | orchestrator | 2025-10-09 10:45:57 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:45:57.039430 | orchestrator | 2025-10-09 10:45:57 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:45:57.042251 | orchestrator | 2025-10-09 10:45:57 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:45:57.044474 | orchestrator | 2025-10-09 10:45:57 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:45:57.044770 | orchestrator | 2025-10-09 10:45:57 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:00.084856 | orchestrator | 2025-10-09 10:46:00 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:00.087263 | orchestrator | 2025-10-09 10:46:00 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:00.089619 | orchestrator | 2025-10-09 10:46:00 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:00.093281 | orchestrator | 2025-10-09 10:46:00 | INFO  | 
Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:00.093314 | orchestrator | 2025-10-09 10:46:00 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:03.132798 | orchestrator | 2025-10-09 10:46:03 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:03.133292 | orchestrator | 2025-10-09 10:46:03 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:03.135574 | orchestrator | 2025-10-09 10:46:03 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:03.136051 | orchestrator | 2025-10-09 10:46:03 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:03.136432 | orchestrator | 2025-10-09 10:46:03 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:06.194163 | orchestrator | 2025-10-09 10:46:06 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:06.197690 | orchestrator | 2025-10-09 10:46:06 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:06.199220 | orchestrator | 2025-10-09 10:46:06 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:06.200635 | orchestrator | 2025-10-09 10:46:06 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:06.200676 | orchestrator | 2025-10-09 10:46:06 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:09.243829 | orchestrator | 2025-10-09 10:46:09 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:09.245345 | orchestrator | 2025-10-09 10:46:09 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:09.247685 | orchestrator | 2025-10-09 10:46:09 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:09.250140 | orchestrator | 2025-10-09 10:46:09 | INFO  | Task 
080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:09.250172 | orchestrator | 2025-10-09 10:46:09 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:12.282343 | orchestrator | 2025-10-09 10:46:12 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:12.282444 | orchestrator | 2025-10-09 10:46:12 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:12.282632 | orchestrator | 2025-10-09 10:46:12 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:12.283414 | orchestrator | 2025-10-09 10:46:12 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:12.283438 | orchestrator | 2025-10-09 10:46:12 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:15.313161 | orchestrator | 2025-10-09 10:46:15 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:15.313506 | orchestrator | 2025-10-09 10:46:15 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:15.314384 | orchestrator | 2025-10-09 10:46:15 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:15.314905 | orchestrator | 2025-10-09 10:46:15 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:15.314932 | orchestrator | 2025-10-09 10:46:15 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:18.350339 | orchestrator | 2025-10-09 10:46:18 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:18.351225 | orchestrator | 2025-10-09 10:46:18 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:18.352337 | orchestrator | 2025-10-09 10:46:18 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:18.353722 | orchestrator | 2025-10-09 10:46:18 | INFO  | Task 
080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:18.354496 | orchestrator | 2025-10-09 10:46:18 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:21.408962 | orchestrator | 2025-10-09 10:46:21 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:21.412051 | orchestrator | 2025-10-09 10:46:21 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:21.413791 | orchestrator | 2025-10-09 10:46:21 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:21.415643 | orchestrator | 2025-10-09 10:46:21 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:21.415683 | orchestrator | 2025-10-09 10:46:21 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:24.452066 | orchestrator | 2025-10-09 10:46:24 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:24.452959 | orchestrator | 2025-10-09 10:46:24 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:24.454436 | orchestrator | 2025-10-09 10:46:24 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:24.455871 | orchestrator | 2025-10-09 10:46:24 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:24.455901 | orchestrator | 2025-10-09 10:46:24 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:27.500701 | orchestrator | 2025-10-09 10:46:27 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:27.501283 | orchestrator | 2025-10-09 10:46:27 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:27.502731 | orchestrator | 2025-10-09 10:46:27 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:27.504455 | orchestrator | 2025-10-09 10:46:27 | INFO  | Task 
080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:27.504479 | orchestrator | 2025-10-09 10:46:27 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:30.551839 | orchestrator | 2025-10-09 10:46:30 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:30.552905 | orchestrator | 2025-10-09 10:46:30 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:30.554677 | orchestrator | 2025-10-09 10:46:30 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:30.556721 | orchestrator | 2025-10-09 10:46:30 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:30.556770 | orchestrator | 2025-10-09 10:46:30 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:33.611267 | orchestrator | 2025-10-09 10:46:33 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:33.611371 | orchestrator | 2025-10-09 10:46:33 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:33.611386 | orchestrator | 2025-10-09 10:46:33 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:33.611397 | orchestrator | 2025-10-09 10:46:33 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:33.611408 | orchestrator | 2025-10-09 10:46:33 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:36.696027 | orchestrator | 2025-10-09 10:46:36 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:36.696159 | orchestrator | 2025-10-09 10:46:36 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:36.696172 | orchestrator | 2025-10-09 10:46:36 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:36.696181 | orchestrator | 2025-10-09 10:46:36 | INFO  | Task 
080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:36.696189 | orchestrator | 2025-10-09 10:46:36 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:39.723896 | orchestrator | 2025-10-09 10:46:39 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:39.724374 | orchestrator | 2025-10-09 10:46:39 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:39.726444 | orchestrator | 2025-10-09 10:46:39 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:39.726994 | orchestrator | 2025-10-09 10:46:39 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state STARTED 2025-10-09 10:46:39.727060 | orchestrator | 2025-10-09 10:46:39 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:42.747857 | orchestrator | 2025-10-09 10:46:42 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:42.748289 | orchestrator | 2025-10-09 10:46:42 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:42.748726 | orchestrator | 2025-10-09 10:46:42 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:42.750671 | orchestrator | 2025-10-09 10:46:42 | INFO  | Task 080d76da-4488-4fc8-bfe5-15dbbb455bab is in state SUCCESS 2025-10-09 10:46:42.752137 | orchestrator | 2025-10-09 10:46:42.752209 | orchestrator | 2025-10-09 10:46:42.752223 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:46:42.752236 | orchestrator | 2025-10-09 10:46:42.752248 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:46:42.752260 | orchestrator | Thursday 09 October 2025 10:45:36 +0000 (0:00:00.230) 0:00:00.230 ****** 2025-10-09 10:46:42.752272 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:46:42.752285 | orchestrator | ok: [testbed-node-1] 
2025-10-09 10:46:42.752296 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:46:42.752308 | orchestrator | 2025-10-09 10:46:42.752319 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:46:42.752331 | orchestrator | Thursday 09 October 2025 10:45:36 +0000 (0:00:00.341) 0:00:00.572 ****** 2025-10-09 10:46:42.752342 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-10-09 10:46:42.752355 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-10-09 10:46:42.752366 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-10-09 10:46:42.752377 | orchestrator | 2025-10-09 10:46:42.752389 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-10-09 10:46:42.752400 | orchestrator | 2025-10-09 10:46:42.752412 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-10-09 10:46:42.752423 | orchestrator | Thursday 09 October 2025 10:45:37 +0000 (0:00:01.043) 0:00:01.615 ****** 2025-10-09 10:46:42.752434 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:46:42.752445 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:46:42.752457 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:46:42.752468 | orchestrator | 2025-10-09 10:46:42.752479 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:46:42.752492 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:46:42.752506 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:46:42.752517 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:46:42.752529 | orchestrator | 2025-10-09 10:46:42.752540 | orchestrator | 2025-10-09 10:46:42.752551 | orchestrator | TASKS RECAP 
******************************************************************** 2025-10-09 10:46:42.752563 | orchestrator | Thursday 09 October 2025 10:45:38 +0000 (0:00:01.105) 0:00:02.721 ****** 2025-10-09 10:46:42.752574 | orchestrator | =============================================================================== 2025-10-09 10:46:42.752613 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.11s 2025-10-09 10:46:42.752625 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s 2025-10-09 10:46:42.752636 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-10-09 10:46:42.752647 | orchestrator | 2025-10-09 10:46:42.752658 | orchestrator | 2025-10-09 10:46:42.752669 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:46:42.752680 | orchestrator | 2025-10-09 10:46:42.752690 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:46:42.752701 | orchestrator | Thursday 09 October 2025 10:44:42 +0000 (0:00:00.276) 0:00:00.276 ****** 2025-10-09 10:46:42.752715 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:46:42.752727 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:46:42.752739 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:46:42.752751 | orchestrator | 2025-10-09 10:46:42.752763 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:46:42.752775 | orchestrator | Thursday 09 October 2025 10:44:43 +0000 (0:00:00.332) 0:00:00.609 ****** 2025-10-09 10:46:42.752787 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-10-09 10:46:42.752799 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-10-09 10:46:42.752811 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-10-09 10:46:42.752823 | orchestrator | 2025-10-09 
10:46:42.752835 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-10-09 10:46:42.752847 | orchestrator | 2025-10-09 10:46:42.752859 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-10-09 10:46:42.752871 | orchestrator | Thursday 09 October 2025 10:44:43 +0000 (0:00:00.531) 0:00:01.140 ****** 2025-10-09 10:46:42.752883 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:46:42.752896 | orchestrator | 2025-10-09 10:46:42.752908 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-10-09 10:46:42.752920 | orchestrator | Thursday 09 October 2025 10:44:44 +0000 (0:00:00.779) 0:00:01.919 ****** 2025-10-09 10:46:42.752933 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-10-09 10:46:42.752945 | orchestrator | 2025-10-09 10:46:42.752957 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-10-09 10:46:42.752969 | orchestrator | Thursday 09 October 2025 10:44:48 +0000 (0:00:03.844) 0:00:05.764 ****** 2025-10-09 10:46:42.752982 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-10-09 10:46:42.752995 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-10-09 10:46:42.753007 | orchestrator | 2025-10-09 10:46:42.753020 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-10-09 10:46:42.753032 | orchestrator | Thursday 09 October 2025 10:44:55 +0000 (0:00:07.067) 0:00:12.831 ****** 2025-10-09 10:46:42.753045 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:46:42.753057 | orchestrator | 2025-10-09 10:46:42.753069 | orchestrator | TASK [service-ks-register : magnum | 
Creating users] *************************** 2025-10-09 10:46:42.753080 | orchestrator | Thursday 09 October 2025 10:44:59 +0000 (0:00:03.754) 0:00:16.585 ****** 2025-10-09 10:46:42.753102 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:46:42.753148 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-10-09 10:46:42.753160 | orchestrator | 2025-10-09 10:46:42.753171 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-10-09 10:46:42.753181 | orchestrator | Thursday 09 October 2025 10:45:03 +0000 (0:00:04.165) 0:00:20.751 ****** 2025-10-09 10:46:42.753192 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:46:42.753203 | orchestrator | 2025-10-09 10:46:42.753213 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-10-09 10:46:42.753233 | orchestrator | Thursday 09 October 2025 10:45:07 +0000 (0:00:03.795) 0:00:24.547 ****** 2025-10-09 10:46:42.753243 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-10-09 10:46:42.753254 | orchestrator | 2025-10-09 10:46:42.753265 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-10-09 10:46:42.753275 | orchestrator | Thursday 09 October 2025 10:45:11 +0000 (0:00:04.491) 0:00:29.039 ****** 2025-10-09 10:46:42.753286 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:46:42.753297 | orchestrator | 2025-10-09 10:46:42.753307 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-10-09 10:46:42.753318 | orchestrator | Thursday 09 October 2025 10:45:15 +0000 (0:00:03.684) 0:00:32.724 ****** 2025-10-09 10:46:42.753329 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:46:42.753339 | orchestrator | 2025-10-09 10:46:42.753350 | orchestrator | TASK [magnum : Creating Magnum trustee user role] 
****************************** 2025-10-09 10:46:42.753361 | orchestrator | Thursday 09 October 2025 10:45:19 +0000 (0:00:04.304) 0:00:37.028 ****** 2025-10-09 10:46:42.753371 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:46:42.753382 | orchestrator | 2025-10-09 10:46:42.753393 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-10-09 10:46:42.753404 | orchestrator | Thursday 09 October 2025 10:45:23 +0000 (0:00:04.123) 0:00:41.152 ****** 2025-10-09 10:46:42.753418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.753434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.753446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.753471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.753484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.753496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.753507 | orchestrator | 2025-10-09 10:46:42.753518 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-10-09 10:46:42.753529 | orchestrator | Thursday 09 October 2025 10:45:25 +0000 (0:00:01.579) 0:00:42.732 ****** 2025-10-09 10:46:42.753540 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:46:42.753551 | 
orchestrator | 2025-10-09 10:46:42.753562 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-10-09 10:46:42.753572 | orchestrator | Thursday 09 October 2025 10:45:25 +0000 (0:00:00.134) 0:00:42.867 ****** 2025-10-09 10:46:42.753583 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:46:42.753593 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:46:42.753604 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:46:42.753615 | orchestrator | 2025-10-09 10:46:42.753626 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-10-09 10:46:42.753637 | orchestrator | Thursday 09 October 2025 10:45:25 +0000 (0:00:00.558) 0:00:43.425 ****** 2025-10-09 10:46:42.753647 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:46:42.753658 | orchestrator | 2025-10-09 10:46:42.753669 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-10-09 10:46:42.753679 | orchestrator | Thursday 09 October 2025 10:45:26 +0000 (0:00:01.016) 0:00:44.442 ****** 2025-10-09 10:46:42.753691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.753717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.753730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.753742 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.753754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.753765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.753782 | orchestrator | 2025-10-09 10:46:42.753794 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-10-09 10:46:42.753804 | orchestrator | Thursday 09 October 2025 10:45:29 +0000 (0:00:02.275) 0:00:46.717 ****** 2025-10-09 10:46:42.753815 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:46:42.753826 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:46:42.753837 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:46:42.753847 | orchestrator | 2025-10-09 10:46:42.753858 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-10-09 10:46:42.753874 | orchestrator | Thursday 09 October 2025 10:45:29 +0000 (0:00:00.364) 0:00:47.082 ****** 2025-10-09 10:46:42.753886 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:46:42.753897 | orchestrator | 2025-10-09 10:46:42.753907 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-10-09 10:46:42.753918 | orchestrator | Thursday 09 October 2025 10:45:30 +0000 (0:00:00.723) 0:00:47.806 ****** 2025-10-09 10:46:42.753930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.753942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.753953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.753971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.753989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754013 | orchestrator | 2025-10-09 10:46:42.754073 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-10-09 10:46:42.754086 | orchestrator | Thursday 09 October 2025 10:45:32 +0000 (0:00:02.313) 0:00:50.120 ****** 2025-10-09 10:46:42.754098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:46:42.754128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:46:42.754148 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:46:42.754160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:46:42.754182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:46:42.754194 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:46:42.754205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:46:42.754217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:46:42.754229 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:46:42.754247 | orchestrator | 2025-10-09 10:46:42.754258 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-10-09 10:46:42.754269 | orchestrator | Thursday 09 October 2025 10:45:33 +0000 (0:00:00.786) 0:00:50.906 ****** 2025-10-09 10:46:42.754281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:46:42.754293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:46:42.754305 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:46:42.754323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:46:42.754335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})  2025-10-09 10:46:42.754347 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:46:42.754358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:46:42.754377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:46:42.754388 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:46:42.754399 | orchestrator | 2025-10-09 10:46:42.754411 | orchestrator | TASK [magnum : Copying over config.json files 
for services] ******************** 2025-10-09 10:46:42.754422 | orchestrator | Thursday 09 October 2025 10:45:34 +0000 (0:00:01.121) 0:00:52.028 ****** 2025-10-09 10:46:42.754440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.754470 | orchestrator | 2025-10-09 10:46:42 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:42.754484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.754497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.754516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754560 | orchestrator | 2025-10-09 10:46:42.754571 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-10-09 10:46:42.754582 | orchestrator | Thursday 09 October 2025 10:45:37 +0000 (0:00:02.559) 0:00:54.587 ****** 2025-10-09 10:46:42.754607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.754620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.754638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.754650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754700 | orchestrator | 2025-10-09 10:46:42.754712 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-10-09 10:46:42.754734 | orchestrator | Thursday 09 October 2025 10:45:44 +0000 (0:00:07.866) 0:01:02.454 ****** 2025-10-09 10:46:42.754746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}})  2025-10-09 10:46:42.754758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:46:42.754769 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:46:42.754781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:46:42.754802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:46:42.754814 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:46:42.754836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-10-09 10:46:42.754856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:46:42.754867 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:46:42.754879 | orchestrator | 2025-10-09 10:46:42.754890 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-10-09 10:46:42.754901 | orchestrator | Thursday 09 October 2025 10:45:46 +0000 (0:00:01.323) 0:01:03.778 ****** 2025-10-09 10:46:42.754912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.754931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.754948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-10-09 10:46:42.754971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.754994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:46:42.755005 | orchestrator | 2025-10-09 10:46:42.755016 | orchestrator | TASK 
[magnum : include_tasks] ************************************************** 2025-10-09 10:46:42.755027 | orchestrator | Thursday 09 October 2025 10:45:49 +0000 (0:00:03.069) 0:01:06.847 ****** 2025-10-09 10:46:42.755039 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:46:42.755050 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:46:42.755061 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:46:42.755071 | orchestrator | 2025-10-09 10:46:42.755083 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-10-09 10:46:42.755093 | orchestrator | Thursday 09 October 2025 10:45:49 +0000 (0:00:00.446) 0:01:07.293 ****** 2025-10-09 10:46:42.755104 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:46:42.755169 | orchestrator | 2025-10-09 10:46:42.755181 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-10-09 10:46:42.755192 | orchestrator | Thursday 09 October 2025 10:45:52 +0000 (0:00:02.279) 0:01:09.573 ****** 2025-10-09 10:46:42.755203 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:46:42.755213 | orchestrator | 2025-10-09 10:46:42.755224 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-10-09 10:46:42.755235 | orchestrator | Thursday 09 October 2025 10:45:54 +0000 (0:00:02.402) 0:01:11.976 ****** 2025-10-09 10:46:42.755253 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:46:42.755264 | orchestrator | 2025-10-09 10:46:42.755275 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-10-09 10:46:42.755294 | orchestrator | Thursday 09 October 2025 10:46:12 +0000 (0:00:18.150) 0:01:30.127 ****** 2025-10-09 10:46:42.755305 | orchestrator | 2025-10-09 10:46:42.755316 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-10-09 10:46:42.755327 | orchestrator | Thursday 09 October 2025 
10:46:12 +0000 (0:00:00.137) 0:01:30.264 ****** 2025-10-09 10:46:42.755338 | orchestrator | 2025-10-09 10:46:42.755349 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-10-09 10:46:42.755360 | orchestrator | Thursday 09 October 2025 10:46:12 +0000 (0:00:00.146) 0:01:30.411 ****** 2025-10-09 10:46:42.755371 | orchestrator | 2025-10-09 10:46:42.755381 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-10-09 10:46:42.755392 | orchestrator | Thursday 09 October 2025 10:46:13 +0000 (0:00:00.207) 0:01:30.619 ****** 2025-10-09 10:46:42.755403 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:46:42.755413 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:46:42.755424 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:46:42.755434 | orchestrator | 2025-10-09 10:46:42.755445 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-10-09 10:46:42.755461 | orchestrator | Thursday 09 October 2025 10:46:31 +0000 (0:00:18.053) 0:01:48.672 ****** 2025-10-09 10:46:42.755472 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:46:42.755482 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:46:42.755493 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:46:42.755504 | orchestrator | 2025-10-09 10:46:42.755515 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:46:42.755526 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-10-09 10:46:42.755537 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:46:42.755548 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:46:42.755559 | orchestrator | 2025-10-09 10:46:42.755570 | orchestrator | 2025-10-09 
10:46:42.755580 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:46:42.755591 | orchestrator | Thursday 09 October 2025 10:46:41 +0000 (0:00:10.733) 0:01:59.406 ****** 2025-10-09 10:46:42.755602 | orchestrator | =============================================================================== 2025-10-09 10:46:42.755612 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.15s 2025-10-09 10:46:42.755623 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.05s 2025-10-09 10:46:42.755634 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.73s 2025-10-09 10:46:42.755644 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.87s 2025-10-09 10:46:42.755655 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.07s 2025-10-09 10:46:42.755666 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.49s 2025-10-09 10:46:42.755677 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.30s 2025-10-09 10:46:42.755688 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.17s 2025-10-09 10:46:42.755698 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.12s 2025-10-09 10:46:42.755708 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.84s 2025-10-09 10:46:42.755717 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.80s 2025-10-09 10:46:42.755727 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.75s 2025-10-09 10:46:42.755736 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.68s 2025-10-09 10:46:42.755746 | 
orchestrator | magnum : Check magnum containers ---------------------------------------- 3.07s 2025-10-09 10:46:42.755761 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.56s 2025-10-09 10:46:42.755770 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.40s 2025-10-09 10:46:42.755780 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.31s 2025-10-09 10:46:42.755790 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.28s 2025-10-09 10:46:42.755799 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.28s 2025-10-09 10:46:42.755809 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.58s 2025-10-09 10:46:45.778470 | orchestrator | 2025-10-09 10:46:45 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:45.778581 | orchestrator | 2025-10-09 10:46:45 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:45.778937 | orchestrator | 2025-10-09 10:46:45 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:45.778958 | orchestrator | 2025-10-09 10:46:45 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:48.820921 | orchestrator | 2025-10-09 10:46:48 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:46:48.821024 | orchestrator | 2025-10-09 10:46:48 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:48.822628 | orchestrator | 2025-10-09 10:46:48 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:48.822808 | orchestrator | 2025-10-09 10:46:48 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:46:51.876442 | orchestrator | 2025-10-09 10:46:51 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is 
in state STARTED 2025-10-09 10:46:51.878966 | orchestrator | 2025-10-09 10:46:51 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:46:51.881505 | orchestrator | 2025-10-09 10:46:51 | INFO  | Task 9f8c697b-d77d-436e-91ff-19456c414685 is in state STARTED 2025-10-09 10:46:51.881536 | orchestrator | 2025-10-09 10:46:51 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:47:25.458847 | orchestrator | 2025-10-09 10:47:25 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:47:25.459696 | orchestrator | 2025-10-09 10:47:25 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:47:25.465601 | orchestrator | 2025-10-09 10:47:25 | INFO  | Task 
9f8c697b-d77d-436e-91ff-19456c414685 is in state SUCCESS 2025-10-09 10:47:25.468454 | orchestrator | 2025-10-09 10:47:25.468524 | orchestrator | 2025-10-09 10:47:25.468540 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:47:25.468553 | orchestrator | 2025-10-09 10:47:25.468564 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-10-09 10:47:25.468576 | orchestrator | Thursday 09 October 2025 10:37:38 +0000 (0:00:00.292) 0:00:00.292 ****** 2025-10-09 10:47:25.468587 | orchestrator | changed: [testbed-manager] 2025-10-09 10:47:25.468598 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.468609 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:47:25.468620 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:47:25.468631 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:47:25.468657 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:47:25.468668 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.468689 | orchestrator | 2025-10-09 10:47:25.468700 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:47:25.468711 | orchestrator | Thursday 09 October 2025 10:37:39 +0000 (0:00:01.010) 0:00:01.302 ****** 2025-10-09 10:47:25.468722 | orchestrator | changed: [testbed-manager] 2025-10-09 10:47:25.468733 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.468744 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:47:25.468755 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:47:25.468765 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:47:25.468776 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:47:25.468787 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.468798 | orchestrator | 2025-10-09 10:47:25.468809 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2025-10-09 10:47:25.468820 | orchestrator | Thursday 09 October 2025 10:37:40 +0000 (0:00:00.752) 0:00:02.055 ****** 2025-10-09 10:47:25.468831 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-10-09 10:47:25.468842 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-10-09 10:47:25.468853 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-10-09 10:47:25.468863 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-10-09 10:47:25.468874 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-10-09 10:47:25.468885 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-10-09 10:47:25.468897 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-10-09 10:47:25.468915 | orchestrator | 2025-10-09 10:47:25.468935 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-10-09 10:47:25.468954 | orchestrator | 2025-10-09 10:47:25.468973 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-10-09 10:47:25.468992 | orchestrator | Thursday 09 October 2025 10:37:41 +0000 (0:00:01.018) 0:00:03.074 ****** 2025-10-09 10:47:25.469012 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:47:25.469031 | orchestrator | 2025-10-09 10:47:25.469045 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-10-09 10:47:25.469058 | orchestrator | Thursday 09 October 2025 10:37:42 +0000 (0:00:00.827) 0:00:03.902 ****** 2025-10-09 10:47:25.469070 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-10-09 10:47:25.469082 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-10-09 10:47:25.469095 | orchestrator | 2025-10-09 10:47:25.469138 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 
2025-10-09 10:47:25.469151 | orchestrator | Thursday 09 October 2025 10:37:46 +0000 (0:00:03.765) 0:00:07.668 ****** 2025-10-09 10:47:25.469188 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:47:25.469202 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-10-09 10:47:25.469214 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.469226 | orchestrator | 2025-10-09 10:47:25.469239 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-10-09 10:47:25.469251 | orchestrator | Thursday 09 October 2025 10:37:50 +0000 (0:00:04.141) 0:00:11.809 ****** 2025-10-09 10:47:25.469264 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.469276 | orchestrator | 2025-10-09 10:47:25.469287 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-10-09 10:47:25.469300 | orchestrator | Thursday 09 October 2025 10:37:51 +0000 (0:00:01.377) 0:00:13.186 ****** 2025-10-09 10:47:25.469312 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.469324 | orchestrator | 2025-10-09 10:47:25.469336 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-10-09 10:47:25.469347 | orchestrator | Thursday 09 October 2025 10:37:53 +0000 (0:00:02.270) 0:00:15.457 ****** 2025-10-09 10:47:25.469358 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.469369 | orchestrator | 2025-10-09 10:47:25.469380 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-10-09 10:47:25.469391 | orchestrator | Thursday 09 October 2025 10:37:57 +0000 (0:00:03.532) 0:00:18.989 ****** 2025-10-09 10:47:25.469401 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.469412 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.469423 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.469434 | orchestrator | 2025-10-09 10:47:25.469445 | 
orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-10-09 10:47:25.469471 | orchestrator | Thursday 09 October 2025 10:37:58 +0000 (0:00:00.579) 0:00:19.569 ****** 2025-10-09 10:47:25.469483 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:25.469494 | orchestrator | 2025-10-09 10:47:25.469506 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-10-09 10:47:25.469517 | orchestrator | Thursday 09 October 2025 10:38:28 +0000 (0:00:30.734) 0:00:50.304 ****** 2025-10-09 10:47:25.469527 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.469538 | orchestrator | 2025-10-09 10:47:25.469549 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-10-09 10:47:25.469560 | orchestrator | Thursday 09 October 2025 10:38:45 +0000 (0:00:16.659) 0:01:06.963 ****** 2025-10-09 10:47:25.469571 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:25.469582 | orchestrator | 2025-10-09 10:47:25.469593 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-10-09 10:47:25.469604 | orchestrator | Thursday 09 October 2025 10:38:58 +0000 (0:00:12.899) 0:01:19.863 ****** 2025-10-09 10:47:25.469629 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:25.469640 | orchestrator | 2025-10-09 10:47:25.469651 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-10-09 10:47:25.469662 | orchestrator | Thursday 09 October 2025 10:38:59 +0000 (0:00:01.182) 0:01:21.046 ****** 2025-10-09 10:47:25.469673 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.469684 | orchestrator | 2025-10-09 10:47:25.469694 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-10-09 10:47:25.469705 | orchestrator | Thursday 09 October 2025 10:39:00 +0000 (0:00:00.523) 0:01:21.569 ****** 2025-10-09 
10:47:25.469717 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:47:25.469728 | orchestrator | 2025-10-09 10:47:25.469739 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-10-09 10:47:25.469750 | orchestrator | Thursday 09 October 2025 10:39:00 +0000 (0:00:00.555) 0:01:22.125 ****** 2025-10-09 10:47:25.469760 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:25.469771 | orchestrator | 2025-10-09 10:47:25.469782 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-10-09 10:47:25.469793 | orchestrator | Thursday 09 October 2025 10:39:20 +0000 (0:00:19.884) 0:01:42.010 ****** 2025-10-09 10:47:25.469811 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.469822 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.469833 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.469844 | orchestrator | 2025-10-09 10:47:25.469855 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-10-09 10:47:25.469866 | orchestrator | 2025-10-09 10:47:25.469877 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-10-09 10:47:25.469887 | orchestrator | Thursday 09 October 2025 10:39:20 +0000 (0:00:00.416) 0:01:42.427 ****** 2025-10-09 10:47:25.469898 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:47:25.469909 | orchestrator | 2025-10-09 10:47:25.469920 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-10-09 10:47:25.469931 | orchestrator | Thursday 09 October 2025 10:39:21 +0000 (0:00:00.697) 0:01:43.124 ****** 2025-10-09 10:47:25.469942 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.469953 | orchestrator | skipping: [testbed-node-2] 
2025-10-09 10:47:25.469964 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.469975 | orchestrator | 2025-10-09 10:47:25.469986 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-10-09 10:47:25.469996 | orchestrator | Thursday 09 October 2025 10:39:23 +0000 (0:00:02.213) 0:01:45.338 ****** 2025-10-09 10:47:25.470007 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470075 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470088 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.470099 | orchestrator | 2025-10-09 10:47:25.470127 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-10-09 10:47:25.470138 | orchestrator | Thursday 09 October 2025 10:39:26 +0000 (0:00:02.324) 0:01:47.663 ****** 2025-10-09 10:47:25.470149 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.470160 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470171 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470182 | orchestrator | 2025-10-09 10:47:25.470193 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-10-09 10:47:25.470204 | orchestrator | Thursday 09 October 2025 10:39:26 +0000 (0:00:00.428) 0:01:48.092 ****** 2025-10-09 10:47:25.470215 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-10-09 10:47:25.470226 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470237 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-10-09 10:47:25.470248 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470259 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-10-09 10:47:25.470270 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-10-09 10:47:25.470280 | orchestrator | 2025-10-09 10:47:25.470291 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts 
exist] ****************** 2025-10-09 10:47:25.470302 | orchestrator | Thursday 09 October 2025 10:39:36 +0000 (0:00:09.770) 0:01:57.862 ****** 2025-10-09 10:47:25.470313 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.470324 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470335 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470345 | orchestrator | 2025-10-09 10:47:25.470357 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-10-09 10:47:25.470368 | orchestrator | Thursday 09 October 2025 10:39:37 +0000 (0:00:01.281) 0:01:59.144 ****** 2025-10-09 10:47:25.470378 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-10-09 10:47:25.470389 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.470400 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-10-09 10:47:25.470411 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470422 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-10-09 10:47:25.470433 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470443 | orchestrator | 2025-10-09 10:47:25.470454 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-10-09 10:47:25.470472 | orchestrator | Thursday 09 October 2025 10:39:39 +0000 (0:00:01.724) 0:02:00.868 ****** 2025-10-09 10:47:25.470489 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470501 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470511 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.470522 | orchestrator | 2025-10-09 10:47:25.470533 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-10-09 10:47:25.470544 | orchestrator | Thursday 09 October 2025 10:39:40 +0000 (0:00:00.921) 0:02:01.789 ****** 2025-10-09 10:47:25.470555 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470566 | 
orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470577 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.470588 | orchestrator | 2025-10-09 10:47:25.470599 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-10-09 10:47:25.470610 | orchestrator | Thursday 09 October 2025 10:39:41 +0000 (0:00:01.087) 0:02:02.876 ****** 2025-10-09 10:47:25.470621 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470632 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470668 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.470688 | orchestrator | 2025-10-09 10:47:25.470707 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-10-09 10:47:25.470725 | orchestrator | Thursday 09 October 2025 10:39:43 +0000 (0:00:02.332) 0:02:05.209 ****** 2025-10-09 10:47:25.470739 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470749 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470760 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:25.470771 | orchestrator | 2025-10-09 10:47:25.470782 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-10-09 10:47:25.470793 | orchestrator | Thursday 09 October 2025 10:40:06 +0000 (0:00:22.700) 0:02:27.910 ****** 2025-10-09 10:47:25.470804 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470815 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470826 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:25.470837 | orchestrator | 2025-10-09 10:47:25.470847 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-10-09 10:47:25.470858 | orchestrator | Thursday 09 October 2025 10:40:20 +0000 (0:00:14.273) 0:02:42.183 ****** 2025-10-09 10:47:25.470869 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:25.470880 | orchestrator | 
skipping: [testbed-node-2] 2025-10-09 10:47:25.470891 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470901 | orchestrator | 2025-10-09 10:47:25.470912 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-10-09 10:47:25.470923 | orchestrator | Thursday 09 October 2025 10:40:22 +0000 (0:00:01.382) 0:02:43.566 ****** 2025-10-09 10:47:25.470934 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.470945 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.470955 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.470966 | orchestrator | 2025-10-09 10:47:25.470977 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-10-09 10:47:25.470987 | orchestrator | Thursday 09 October 2025 10:40:35 +0000 (0:00:13.565) 0:02:57.131 ****** 2025-10-09 10:47:25.470998 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.471009 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.471020 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.471030 | orchestrator | 2025-10-09 10:47:25.471041 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-10-09 10:47:25.471052 | orchestrator | Thursday 09 October 2025 10:40:36 +0000 (0:00:01.122) 0:02:58.254 ****** 2025-10-09 10:47:25.471063 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.471074 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.471084 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.471095 | orchestrator | 2025-10-09 10:47:25.471166 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-10-09 10:47:25.471179 | orchestrator | 2025-10-09 10:47:25.471199 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-10-09 10:47:25.471211 | orchestrator | Thursday 09 October 
2025 10:40:37 +0000 (0:00:00.599) 0:02:58.853 ****** 2025-10-09 10:47:25.471222 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:47:25.471235 | orchestrator | 2025-10-09 10:47:25.471246 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-10-09 10:47:25.471257 | orchestrator | Thursday 09 October 2025 10:40:37 +0000 (0:00:00.583) 0:02:59.437 ****** 2025-10-09 10:47:25.471268 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-10-09 10:47:25.471278 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-10-09 10:47:25.471289 | orchestrator | 2025-10-09 10:47:25.471300 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-10-09 10:47:25.471311 | orchestrator | Thursday 09 October 2025 10:40:41 +0000 (0:00:04.022) 0:03:03.460 ****** 2025-10-09 10:47:25.471321 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-10-09 10:47:25.471334 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-10-09 10:47:25.471345 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-10-09 10:47:25.471356 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-10-09 10:47:25.471367 | orchestrator | 2025-10-09 10:47:25.471379 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-10-09 10:47:25.471390 | orchestrator | Thursday 09 October 2025 10:40:48 +0000 (0:00:06.772) 0:03:10.233 ****** 2025-10-09 10:47:25.471401 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 10:47:25.471412 | orchestrator | 
2025-10-09 10:47:25.471423 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-10-09 10:47:25.471432 | orchestrator | Thursday 09 October 2025 10:40:52 +0000 (0:00:03.441) 0:03:13.674 ****** 2025-10-09 10:47:25.471442 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:47:25.471457 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-10-09 10:47:25.471467 | orchestrator | 2025-10-09 10:47:25.471477 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-10-09 10:47:25.471487 | orchestrator | Thursday 09 October 2025 10:40:56 +0000 (0:00:04.266) 0:03:17.941 ****** 2025-10-09 10:47:25.471497 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:47:25.471507 | orchestrator | 2025-10-09 10:47:25.471516 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-10-09 10:47:25.471526 | orchestrator | Thursday 09 October 2025 10:41:00 +0000 (0:00:04.013) 0:03:21.954 ****** 2025-10-09 10:47:25.471536 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-10-09 10:47:25.471546 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-10-09 10:47:25.471555 | orchestrator | 2025-10-09 10:47:25.471565 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-10-09 10:47:25.471585 | orchestrator | Thursday 09 October 2025 10:41:09 +0000 (0:00:09.442) 0:03:31.397 ****** 2025-10-09 10:47:25.471601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.471624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.471636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.471661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.471673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.471692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.471703 | orchestrator | 2025-10-09 10:47:25.471713 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-10-09 10:47:25.471723 | orchestrator | Thursday 09 October 2025 10:41:11 +0000 (0:00:02.013) 0:03:33.410 ****** 2025-10-09 10:47:25.471733 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.471743 | orchestrator | 2025-10-09 10:47:25.471753 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-10-09 10:47:25.471763 | orchestrator | Thursday 09 October 2025 10:41:12 +0000 (0:00:00.179) 0:03:33.589 ****** 2025-10-09 10:47:25.471773 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.471783 | 
orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.471793 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.471802 | orchestrator | 2025-10-09 10:47:25.471812 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-10-09 10:47:25.471822 | orchestrator | Thursday 09 October 2025 10:41:12 +0000 (0:00:00.720) 0:03:34.309 ****** 2025-10-09 10:47:25.471832 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:47:25.471842 | orchestrator | 2025-10-09 10:47:25.471851 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-10-09 10:47:25.471861 | orchestrator | Thursday 09 October 2025 10:41:14 +0000 (0:00:01.784) 0:03:36.094 ****** 2025-10-09 10:47:25.471871 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.471881 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.471890 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.471900 | orchestrator | 2025-10-09 10:47:25.471910 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-10-09 10:47:25.471920 | orchestrator | Thursday 09 October 2025 10:41:15 +0000 (0:00:00.565) 0:03:36.659 ****** 2025-10-09 10:47:25.471929 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:47:25.471939 | orchestrator | 2025-10-09 10:47:25.471949 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-10-09 10:47:25.471959 | orchestrator | Thursday 09 October 2025 10:41:15 +0000 (0:00:00.789) 0:03:37.449 ****** 2025-10-09 10:47:25.471981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472079 | orchestrator | 2025-10-09 10:47:25.472089 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-10-09 10:47:25.472099 | orchestrator | Thursday 09 October 2025 10:41:19 +0000 (0:00:03.122) 0:03:40.572 ****** 2025-10-09 10:47:25.472125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:47:25.472136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.472147 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.472158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:47:25.472173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.472190 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.472208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:47:25.472220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.472231 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.472241 | orchestrator | 2025-10-09 10:47:25.472251 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-10-09 10:47:25.472261 | orchestrator | Thursday 09 October 2025 10:41:20 +0000 (0:00:01.466) 0:03:42.038 ****** 2025-10-09 10:47:25.472271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:47:25.472287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.472303 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.472322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:47:25.472333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.472343 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.472354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:47:25.472364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.472381 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.472391 | orchestrator | 2025-10-09 10:47:25.472401 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-10-09 10:47:25.472411 | orchestrator | Thursday 09 October 2025 10:41:21 +0000 
(0:00:00.714) 0:03:42.752 ****** 2025-10-09 10:47:25.472433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472520 | orchestrator | 2025-10-09 10:47:25.472530 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-10-09 10:47:25.472540 | orchestrator | Thursday 09 October 2025 10:41:24 +0000 (0:00:02.935) 0:03:45.688 ****** 2025-10-09 10:47:25.472550 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472635 | orchestrator | 2025-10-09 10:47:25.472645 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-10-09 10:47:25.472655 | orchestrator | Thursday 09 October 2025 10:41:35 +0000 (0:00:11.327) 0:03:57.015 ****** 2025-10-09 10:47:25.472669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:47:25.472694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.472705 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.472715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:47:25.472726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.472736 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.472747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-10-09 10:47:25.472766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.472777 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.472787 | orchestrator | 2025-10-09 10:47:25.472797 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-10-09 10:47:25.472807 | orchestrator | 
Thursday 09 October 2025 10:41:36 +0000 (0:00:00.670) 0:03:57.686 ****** 2025-10-09 10:47:25.472817 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.472827 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:47:25.472837 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:47:25.472847 | orchestrator | 2025-10-09 10:47:25.472863 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-10-09 10:47:25.472874 | orchestrator | Thursday 09 October 2025 10:41:37 +0000 (0:00:01.666) 0:03:59.352 ****** 2025-10-09 10:47:25.472884 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.472894 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.472903 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.472913 | orchestrator | 2025-10-09 10:47:25.472923 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-10-09 10:47:25.472933 | orchestrator | Thursday 09 October 2025 10:41:38 +0000 (0:00:00.848) 0:04:00.201 ****** 2025-10-09 10:47:25.472943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.472970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.472993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-10-09 10:47:25.473004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-09 10:47:25.473015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-10-09 10:47:25.473030 | orchestrator |
2025-10-09 10:47:25.473040 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-10-09 10:47:25.473050 | orchestrator | Thursday 09 October 2025 10:41:41 +0000 (0:00:02.995) 0:04:03.196 ******
2025-10-09 10:47:25.473060 | orchestrator |
2025-10-09 10:47:25.473070 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-10-09 10:47:25.473080 | orchestrator | Thursday 09 October 2025 10:41:42 +0000 (0:00:00.595) 0:04:03.792 ******
2025-10-09 10:47:25.473090 | orchestrator |
2025-10-09 10:47:25.473100 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-10-09 10:47:25.473125 | orchestrator | Thursday 09 October 2025 10:41:42 +0000 (0:00:00.426) 0:04:04.218 ******
2025-10-09 10:47:25.473135 | orchestrator |
2025-10-09 10:47:25.473145 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-10-09 10:47:25.473155 | orchestrator | Thursday 09 October 2025 10:41:42 +0000 (0:00:00.213) 0:04:04.432 ******
2025-10-09 10:47:25.473165 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:47:25.473175 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:47:25.473185 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:47:25.473195 | orchestrator |
2025-10-09 10:47:25.473205 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-10-09 10:47:25.473215 | orchestrator | Thursday 09 October 2025 10:42:04 +0000 (0:00:21.480) 0:04:25.913 ******
2025-10-09 10:47:25.473225 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:47:25.473235 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:47:25.473245 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:47:25.473255 | orchestrator |
2025-10-09 10:47:25.473265 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-10-09 10:47:25.473275 | orchestrator |
2025-10-09 10:47:25.473285 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-10-09 10:47:25.473295 | orchestrator | Thursday 09 October 2025 10:42:13 +0000 (0:00:08.956) 0:04:34.869 ******
2025-10-09 10:47:25.473305 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:47:25.473315 | orchestrator |
2025-10-09 10:47:25.473325 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-10-09 10:47:25.473335 | orchestrator | Thursday 09 October 2025 10:42:15 +0000 (0:00:02.034) 0:04:36.904 ******
2025-10-09 10:47:25.473345 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:47:25.473355 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:47:25.473368 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:47:25.473379 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:47:25.473388 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:47:25.473398 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:47:25.473408 | orchestrator |
2025-10-09 10:47:25.473418 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-10-09 10:47:25.473428 | orchestrator | Thursday 09 October 2025 10:42:17 +0000 (0:00:01.713) 0:04:38.617 ******
2025-10-09 10:47:25.473438 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:47:25.473448 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:47:25.473457 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:47:25.473468 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-10-09 10:47:25.473477 | orchestrator |
2025-10-09 10:47:25.473488 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-10-09 10:47:25.473505 | orchestrator | Thursday 09 October 2025 10:42:19 +0000 (0:00:02.859) 0:04:41.476 ******
2025-10-09 10:47:25.473515 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-10-09 10:47:25.473525 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-10-09 10:47:25.473541 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-10-09 10:47:25.473551 | orchestrator |
2025-10-09 10:47:25.473561 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-10-09 10:47:25.473570 | orchestrator | Thursday 09 October 2025 10:42:22 +0000 (0:00:02.038) 0:04:43.514 ******
2025-10-09 10:47:25.473580 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-10-09 10:47:25.473590 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-10-09 10:47:25.473600 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-10-09 10:47:25.473610 | orchestrator |
2025-10-09 10:47:25.473620 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-10-09 10:47:25.473629 | orchestrator | Thursday 09 October 2025 10:42:23 +0000 (0:00:01.527) 0:04:45.042 ******
2025-10-09 10:47:25.473639 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-10-09 10:47:25.473649 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:47:25.473659 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-10-09 10:47:25.473669 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:47:25.473679 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-10-09 10:47:25.473689 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:47:25.473698 | orchestrator |
2025-10-09 10:47:25.473708 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-10-09 10:47:25.473718 | orchestrator | Thursday 09 October 2025 10:42:25 +0000 (0:00:02.036) 0:04:47.078 ******
2025-10-09 10:47:25.473728 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:47:25.473738 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:47:25.473748 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:47:25.473758 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:47:25.473768 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:47:25.473778 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:47:25.473787 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:47:25.473797 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:47:25.473807 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:47:25.473817 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:47:25.473827 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:47:25.473837 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-10-09 10:47:25.473846 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:47:25.473856 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:47:25.473866 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-10-09 10:47:25.473876 | orchestrator |
2025-10-09 10:47:25.473886 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-10-09 10:47:25.473896 | orchestrator | Thursday 09 October 2025 10:42:27 +0000 (0:00:01.667) 0:04:48.746 ******
2025-10-09 10:47:25.473906 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:47:25.473916 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:47:25.473926 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:47:25.473935 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:47:25.473945 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:47:25.473955 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:47:25.473965 | orchestrator |
2025-10-09 10:47:25.473975 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-10-09 10:47:25.473985 | orchestrator | Thursday 09 October 2025 10:42:29 +0000 (0:00:01.954) 0:04:50.701 ******
2025-10-09 10:47:25.473994 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:47:25.474011 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:47:25.474060 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:47:25.474070 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:47:25.474081 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:47:25.474090 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:47:25.474100 | orchestrator |
2025-10-09 10:47:25.474128 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-10-09
10:47:25.474139 | orchestrator | Thursday 09 October 2025 10:42:31 +0000 (0:00:02.114) 0:04:52.816 ****** 2025-10-09 10:47:25.474155 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474185 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474228 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474385 | orchestrator | 2025-10-09 10:47:25.474395 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-10-09 10:47:25.474405 | orchestrator | Thursday 09 October 2025 10:42:35 +0000 (0:00:04.428) 0:04:57.245 ****** 2025-10-09 10:47:25.474415 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:47:25.474426 | orchestrator | 2025-10-09 10:47:25.474436 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-10-09 10:47:25.474445 | orchestrator | Thursday 09 October 2025 10:42:38 +0000 (0:00:02.961) 0:05:00.206 ****** 2025-10-09 10:47:25.474455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474648 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.474658 | orchestrator | 2025-10-09 10:47:25.474668 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-10-09 10:47:25.474678 | orchestrator | Thursday 09 October 2025 10:42:44 +0000 (0:00:06.210) 0:05:06.417 ****** 2025-10-09 10:47:25.474703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.474716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.474726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.474742 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.474753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:47:25.474763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.474773 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.474794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.474805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.474815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.474832 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.474842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.474852 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.474867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.474877 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.474896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:47:25.474906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.474916 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.474927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:47:25.474944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-10-09 10:47:25.474954 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.474964 | orchestrator | 2025-10-09 10:47:25.474974 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-10-09 10:47:25.474984 | orchestrator | Thursday 09 October 2025 10:42:47 +0000 (0:00:02.734) 0:05:09.152 ****** 2025-10-09 10:47:25.474994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.475009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.475026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 
'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.475037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.475053 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.475064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.475074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.475084 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.475099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.475141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.475152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.475170 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.475180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:47:25.475190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.475200 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.475210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:47:25.475220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:47:25.475230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.475246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.475263 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.475273 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.475283 | orchestrator | 2025-10-09 10:47:25.475292 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-10-09 10:47:25.475302 | orchestrator | Thursday 09 October 2025 10:42:50 +0000 (0:00:02.893) 
0:05:12.045 ****** 2025-10-09 10:47:25.475312 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.475322 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.475331 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.475341 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-10-09 10:47:25.475351 | orchestrator | 2025-10-09 10:47:25.475361 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-10-09 10:47:25.475370 | orchestrator | Thursday 09 October 2025 10:42:52 +0000 (0:00:01.616) 0:05:13.662 ****** 2025-10-09 10:47:25.475380 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-09 10:47:25.475418 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-10-09 10:47:25.475428 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-10-09 10:47:25.475438 | orchestrator | 2025-10-09 10:47:25.475447 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-10-09 10:47:25.475457 | orchestrator | Thursday 09 October 2025 10:42:53 +0000 (0:00:01.053) 0:05:14.715 ****** 2025-10-09 10:47:25.475467 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-09 10:47:25.475477 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-10-09 10:47:25.475486 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-10-09 10:47:25.475496 | orchestrator | 2025-10-09 10:47:25.475506 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-10-09 10:47:25.475515 | orchestrator | Thursday 09 October 2025 10:42:54 +0000 (0:00:01.663) 0:05:16.379 ****** 2025-10-09 10:47:25.475525 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:47:25.475535 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:47:25.475544 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:47:25.475554 | orchestrator | 2025-10-09 10:47:25.475564 | orchestrator | TASK 
[nova-cell : Extract cinder key from file] ******************************** 2025-10-09 10:47:25.475573 | orchestrator | Thursday 09 October 2025 10:42:55 +0000 (0:00:00.550) 0:05:16.930 ****** 2025-10-09 10:47:25.475583 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:47:25.475593 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:47:25.475603 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:47:25.475612 | orchestrator | 2025-10-09 10:47:25.475622 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-10-09 10:47:25.475632 | orchestrator | Thursday 09 October 2025 10:42:57 +0000 (0:00:01.676) 0:05:18.606 ****** 2025-10-09 10:47:25.475642 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-10-09 10:47:25.475651 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-10-09 10:47:25.475661 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-10-09 10:47:25.475671 | orchestrator | 2025-10-09 10:47:25.475680 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-10-09 10:47:25.475690 | orchestrator | Thursday 09 October 2025 10:42:58 +0000 (0:00:01.344) 0:05:19.951 ****** 2025-10-09 10:47:25.475700 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-10-09 10:47:25.475709 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-10-09 10:47:25.475719 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-10-09 10:47:25.475729 | orchestrator | 2025-10-09 10:47:25.475738 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-10-09 10:47:25.475748 | orchestrator | Thursday 09 October 2025 10:42:59 +0000 (0:00:01.502) 0:05:21.454 ****** 2025-10-09 10:47:25.475758 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-10-09 10:47:25.475775 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 
2025-10-09 10:47:25.475785 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-10-09 10:47:25.475795 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-10-09 10:47:25.475804 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-10-09 10:47:25.475814 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-10-09 10:47:25.475823 | orchestrator | 2025-10-09 10:47:25.475833 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-10-09 10:47:25.475843 | orchestrator | Thursday 09 October 2025 10:43:05 +0000 (0:00:05.947) 0:05:27.402 ****** 2025-10-09 10:47:25.475852 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.475866 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.475876 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.475886 | orchestrator | 2025-10-09 10:47:25.475895 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-10-09 10:47:25.475905 | orchestrator | Thursday 09 October 2025 10:43:06 +0000 (0:00:00.427) 0:05:27.829 ****** 2025-10-09 10:47:25.475915 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.475924 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.475934 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.475943 | orchestrator | 2025-10-09 10:47:25.475953 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-10-09 10:47:25.475963 | orchestrator | Thursday 09 October 2025 10:43:06 +0000 (0:00:00.288) 0:05:28.118 ****** 2025-10-09 10:47:25.475973 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:47:25.475982 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:47:25.475992 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.476002 | orchestrator | 2025-10-09 10:47:25.476017 | orchestrator | TASK [nova-cell : Pushing nova secret xml for 
libvirt] ************************* 2025-10-09 10:47:25.476027 | orchestrator | Thursday 09 October 2025 10:43:07 +0000 (0:00:01.194) 0:05:29.312 ****** 2025-10-09 10:47:25.476037 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-10-09 10:47:25.476047 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-10-09 10:47:25.476057 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-10-09 10:47:25.476067 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-10-09 10:47:25.476077 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-10-09 10:47:25.476086 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-10-09 10:47:25.476096 | orchestrator | 2025-10-09 10:47:25.476121 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-10-09 10:47:25.476131 | orchestrator | Thursday 09 October 2025 10:43:11 +0000 (0:00:03.245) 0:05:32.558 ****** 2025-10-09 10:47:25.476141 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:47:25.476151 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:47:25.476160 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:47:25.476170 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-10-09 10:47:25.476179 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-10-09 10:47:25.476189 | orchestrator | changed: [testbed-node-4] 
2025-10-09 10:47:25.476198 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:47:25.476208 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-10-09 10:47:25.476218 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.476234 | orchestrator | 2025-10-09 10:47:25.476244 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-10-09 10:47:25.476253 | orchestrator | Thursday 09 October 2025 10:43:14 +0000 (0:00:03.840) 0:05:36.398 ****** 2025-10-09 10:47:25.476263 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.476272 | orchestrator | 2025-10-09 10:47:25.476282 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-10-09 10:47:25.476291 | orchestrator | Thursday 09 October 2025 10:43:15 +0000 (0:00:00.130) 0:05:36.529 ****** 2025-10-09 10:47:25.476301 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.476310 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.476320 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.476329 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.476339 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.476348 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.476358 | orchestrator | 2025-10-09 10:47:25.476367 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-10-09 10:47:25.476377 | orchestrator | Thursday 09 October 2025 10:43:15 +0000 (0:00:00.630) 0:05:37.159 ****** 2025-10-09 10:47:25.476386 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-10-09 10:47:25.476396 | orchestrator | 2025-10-09 10:47:25.476405 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-10-09 10:47:25.476415 | orchestrator | Thursday 09 October 2025 10:43:16 +0000 (0:00:00.720) 0:05:37.880 ****** 2025-10-09 10:47:25.476424 | orchestrator | skipping: 
[testbed-node-3] 2025-10-09 10:47:25.476434 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.476443 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.476453 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.476462 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.476472 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.476481 | orchestrator | 2025-10-09 10:47:25.476490 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-10-09 10:47:25.476500 | orchestrator | Thursday 09 October 2025 10:43:17 +0000 (0:00:00.911) 0:05:38.792 ****** 2025-10-09 10:47:25.476515 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476596 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476612 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476716 | orchestrator | 2025-10-09 10:47:25.476725 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-10-09 10:47:25.476735 | orchestrator | Thursday 09 October 2025 10:43:21 +0000 (0:00:04.063) 0:05:42.856 ****** 2025-10-09 10:47:25.476745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.476756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.476766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.476780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.476798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.476815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.476825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3
2025-10-09 10:47:25 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:47:25.476845 | orchestrator | ', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.476943 | orchestrator | 2025-10-09 10:47:25.476953 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-10-09 10:47:25.476963 | orchestrator | Thursday 09 October 2025 10:43:30 +0000 (0:00:08.656) 0:05:51.512 ****** 2025-10-09 10:47:25.476973 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.476987 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.476997 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.477006 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.477016 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.477026 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.477046 | orchestrator | 2025-10-09 10:47:25.477056 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-10-09 10:47:25.477065 | orchestrator | Thursday 09 October 2025 10:43:31 +0000 (0:00:01.624) 0:05:53.136 ****** 2025-10-09 10:47:25.477075 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-10-09 10:47:25.477085 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-10-09 
10:47:25.477095 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-10-09 10:47:25.477126 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-10-09 10:47:25.477137 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-10-09 10:47:25.477147 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.477156 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-10-09 10:47:25.477166 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-10-09 10:47:25.477175 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.477185 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-10-09 10:47:25.477195 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-10-09 10:47:25.477204 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.477214 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-10-09 10:47:25.477223 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-10-09 10:47:25.477233 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-10-09 10:47:25.477242 | orchestrator | 2025-10-09 10:47:25.477252 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-10-09 10:47:25.477261 | orchestrator | Thursday 09 October 2025 10:43:35 +0000 (0:00:04.233) 0:05:57.370 ****** 2025-10-09 10:47:25.477271 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.477280 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.477290 | orchestrator | skipping: 
[testbed-node-5] 2025-10-09 10:47:25.477299 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.477309 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.477318 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.477328 | orchestrator | 2025-10-09 10:47:25.477337 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-10-09 10:47:25.477347 | orchestrator | Thursday 09 October 2025 10:43:36 +0000 (0:00:00.679) 0:05:58.049 ****** 2025-10-09 10:47:25.477356 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-10-09 10:47:25.477366 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-10-09 10:47:25.477375 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-10-09 10:47:25.477385 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-10-09 10:47:25.477395 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-10-09 10:47:25.477404 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-10-09 10:47:25.477414 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-10-09 10:47:25.477423 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-10-09 10:47:25.477439 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-10-09 10:47:25.477449 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 
'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-10-09 10:47:25.477458 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.477468 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-10-09 10:47:25.477477 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.477487 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-10-09 10:47:25.477496 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.477506 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:47:25.477515 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:47:25.477525 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:47:25.477539 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:47:25.477549 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:47:25.477558 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-10-09 10:47:25.477568 | orchestrator | 2025-10-09 10:47:25.477577 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-10-09 10:47:25.477587 | orchestrator | Thursday 09 October 2025 10:43:43 +0000 (0:00:07.021) 0:06:05.071 ****** 2025-10-09 10:47:25.477596 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-10-09 10:47:25.477611 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'sshd_config.j2', 'dest': 'sshd_config'})  2025-10-09 10:47:25.477621 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-10-09 10:47:25.477631 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-10-09 10:47:25.477640 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-10-09 10:47:25.477650 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:47:25.477659 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-10-09 10:47:25.477669 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:47:25.477678 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-10-09 10:47:25.477688 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-09 10:47:25.477697 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-09 10:47:25.477707 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-10-09 10:47:25.477716 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-10-09 10:47:25.477725 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.477735 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-10-09 10:47:25.477744 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.477754 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:47:25.477763 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-10-09 10:47:25.477773 | orchestrator | skipping: 
[testbed-node-0] 2025-10-09 10:47:25.477789 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:47:25.477799 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-10-09 10:47:25.477808 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:47:25.477818 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:47:25.477827 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-10-09 10:47:25.477837 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:47:25.477846 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:47:25.477856 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-10-09 10:47:25.477865 | orchestrator | 2025-10-09 10:47:25.477875 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-10-09 10:47:25.477884 | orchestrator | Thursday 09 October 2025 10:43:51 +0000 (0:00:07.418) 0:06:12.490 ****** 2025-10-09 10:47:25.477894 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.477903 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.477913 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.477922 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.477932 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.477941 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.477951 | orchestrator | 2025-10-09 10:47:25.477960 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-10-09 10:47:25.477970 | orchestrator | Thursday 09 October 2025 10:43:52 +0000 (0:00:01.075) 0:06:13.566 ****** 
2025-10-09 10:47:25.477979 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.477989 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.477998 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.478007 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.478088 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.478101 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.478130 | orchestrator | 2025-10-09 10:47:25.478139 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-10-09 10:47:25.478149 | orchestrator | Thursday 09 October 2025 10:43:53 +0000 (0:00:00.913) 0:06:14.480 ****** 2025-10-09 10:47:25.478159 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:47:25.478168 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:47:25.478177 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.478187 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.478196 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.478206 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.478215 | orchestrator | 2025-10-09 10:47:25.478225 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-10-09 10:47:25.478239 | orchestrator | Thursday 09 October 2025 10:43:56 +0000 (0:00:03.468) 0:06:17.948 ****** 2025-10-09 10:47:25.478256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.478267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.478284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.478294 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.478304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:47:25.478314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.478324 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.478338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.478355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.478372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.478382 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.478392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-10-09 10:47:25.478402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-10-09 10:47:25.478412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.478422 | 
orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.478437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-10-09 10:47:25.478459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.478470 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.478480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  
2025-10-09 10:47:25.478490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-10-09 10:47:25.478500 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.478509 | orchestrator | 2025-10-09 10:47:25.478519 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-10-09 10:47:25.478529 | orchestrator | Thursday 09 October 2025 10:43:59 +0000 (0:00:02.627) 0:06:20.576 ****** 2025-10-09 10:47:25.478539 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-10-09 10:47:25.478548 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-10-09 10:47:25.478558 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.478567 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-10-09 10:47:25.478577 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-10-09 10:47:25.478586 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.478596 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-10-09 10:47:25.478605 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-10-09 10:47:25.478615 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.478624 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-10-09 10:47:25.478634 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-10-09 10:47:25.478643 | 
orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.478653 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-10-09 10:47:25.478662 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-10-09 10:47:25.478672 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.478681 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-10-09 10:47:25.478691 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-10-09 10:47:25.478700 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.478710 | orchestrator | 2025-10-09 10:47:25.478719 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-10-09 10:47:25.478729 | orchestrator | Thursday 09 October 2025 10:44:01 +0000 (0:00:02.054) 0:06:22.630 ****** 2025-10-09 10:47:25.478743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478776 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478796 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2025-10-09 10:47:25.478924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-10-09 10:47:25.478945 | orchestrator | 2025-10-09 10:47:25.478954 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-10-09 10:47:25.478964 | orchestrator | Thursday 09 October 2025 10:44:06 +0000 (0:00:04.862) 0:06:27.493 ****** 2025-10-09 10:47:25.478974 | orchestrator | 
skipping: [testbed-node-3] 2025-10-09 10:47:25.478984 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.478993 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.479003 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.479012 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.479022 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.479031 | orchestrator | 2025-10-09 10:47:25.479040 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:47:25.479050 | orchestrator | Thursday 09 October 2025 10:44:06 +0000 (0:00:00.669) 0:06:28.163 ****** 2025-10-09 10:47:25.479059 | orchestrator | 2025-10-09 10:47:25.479069 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:47:25.479078 | orchestrator | Thursday 09 October 2025 10:44:06 +0000 (0:00:00.122) 0:06:28.286 ****** 2025-10-09 10:47:25.479088 | orchestrator | 2025-10-09 10:47:25.479097 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:47:25.479148 | orchestrator | Thursday 09 October 2025 10:44:06 +0000 (0:00:00.124) 0:06:28.411 ****** 2025-10-09 10:47:25.479158 | orchestrator | 2025-10-09 10:47:25.479168 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:47:25.479177 | orchestrator | Thursday 09 October 2025 10:44:07 +0000 (0:00:00.122) 0:06:28.533 ****** 2025-10-09 10:47:25.479187 | orchestrator | 2025-10-09 10:47:25.479203 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 10:47:25.479213 | orchestrator | Thursday 09 October 2025 10:44:07 +0000 (0:00:00.119) 0:06:28.653 ****** 2025-10-09 10:47:25.479222 | orchestrator | 2025-10-09 10:47:25.479232 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-10-09 
10:47:25.479241 | orchestrator | Thursday 09 October 2025 10:44:07 +0000 (0:00:00.118) 0:06:28.772 ****** 2025-10-09 10:47:25.479250 | orchestrator | 2025-10-09 10:47:25.479260 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-10-09 10:47:25.479269 | orchestrator | Thursday 09 October 2025 10:44:07 +0000 (0:00:00.237) 0:06:29.009 ****** 2025-10-09 10:47:25.479279 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:47:25.479289 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.479298 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:47:25.479308 | orchestrator | 2025-10-09 10:47:25.479317 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-10-09 10:47:25.479327 | orchestrator | Thursday 09 October 2025 10:44:21 +0000 (0:00:14.370) 0:06:43.380 ****** 2025-10-09 10:47:25.479336 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.479346 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:47:25.479355 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:47:25.479365 | orchestrator | 2025-10-09 10:47:25.479374 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-10-09 10:47:25.479384 | orchestrator | Thursday 09 October 2025 10:44:42 +0000 (0:00:20.762) 0:07:04.142 ****** 2025-10-09 10:47:25.479393 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:47:25.479403 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:47:25.479412 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.479422 | orchestrator | 2025-10-09 10:47:25.479431 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-10-09 10:47:25.479441 | orchestrator | Thursday 09 October 2025 10:45:02 +0000 (0:00:19.648) 0:07:23.790 ****** 2025-10-09 10:47:25.479450 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:47:25.479465 | orchestrator | 
changed: [testbed-node-3] 2025-10-09 10:47:25.479475 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.479484 | orchestrator | 2025-10-09 10:47:25.479494 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-10-09 10:47:25.479503 | orchestrator | Thursday 09 October 2025 10:45:37 +0000 (0:00:34.897) 0:07:58.688 ****** 2025-10-09 10:47:25.479513 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:47:25.479522 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:47:25.479532 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.479541 | orchestrator | 2025-10-09 10:47:25.479550 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-10-09 10:47:25.479560 | orchestrator | Thursday 09 October 2025 10:45:38 +0000 (0:00:01.561) 0:08:00.249 ****** 2025-10-09 10:47:25.479570 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:47:25.479579 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:47:25.479588 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.479598 | orchestrator | 2025-10-09 10:47:25.479613 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-10-09 10:47:25.479623 | orchestrator | Thursday 09 October 2025 10:45:39 +0000 (0:00:01.049) 0:08:01.299 ****** 2025-10-09 10:47:25.479632 | orchestrator | changed: [testbed-node-3] 2025-10-09 10:47:25.479642 | orchestrator | changed: [testbed-node-5] 2025-10-09 10:47:25.479651 | orchestrator | changed: [testbed-node-4] 2025-10-09 10:47:25.479661 | orchestrator | 2025-10-09 10:47:25.479670 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-10-09 10:47:25.479678 | orchestrator | Thursday 09 October 2025 10:46:09 +0000 (0:00:30.168) 0:08:31.468 ****** 2025-10-09 10:47:25.479686 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.479694 | orchestrator | 
2025-10-09 10:47:25.479702 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-10-09 10:47:25.479715 | orchestrator | Thursday 09 October 2025 10:46:10 +0000 (0:00:00.135) 0:08:31.604 ****** 2025-10-09 10:47:25.479723 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.479731 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.479739 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.479746 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.479754 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.479762 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-10-09 10:47:25.479770 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:47:25.479778 | orchestrator | 2025-10-09 10:47:25.479786 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-10-09 10:47:25.479794 | orchestrator | Thursday 09 October 2025 10:46:34 +0000 (0:00:24.252) 0:08:55.857 ****** 2025-10-09 10:47:25.479802 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.479809 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.479817 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.479825 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.479832 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.479840 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.479848 | orchestrator | 2025-10-09 10:47:25.479855 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-10-09 10:47:25.479863 | orchestrator | Thursday 09 October 2025 10:46:43 +0000 (0:00:09.385) 0:09:05.242 ****** 2025-10-09 10:47:25.479871 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.479879 | orchestrator | skipping: 
[testbed-node-3] 2025-10-09 10:47:25.479886 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.479894 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.479902 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.479910 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-10-09 10:47:25.479917 | orchestrator | 2025-10-09 10:47:25.479925 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-10-09 10:47:25.479933 | orchestrator | Thursday 09 October 2025 10:46:48 +0000 (0:00:04.382) 0:09:09.625 ****** 2025-10-09 10:47:25.479941 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:47:25.479948 | orchestrator | 2025-10-09 10:47:25.479956 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-10-09 10:47:25.479964 | orchestrator | Thursday 09 October 2025 10:47:01 +0000 (0:00:13.523) 0:09:23.148 ****** 2025-10-09 10:47:25.479971 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:47:25.479979 | orchestrator | 2025-10-09 10:47:25.479987 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-10-09 10:47:25.479995 | orchestrator | Thursday 09 October 2025 10:47:03 +0000 (0:00:01.408) 0:09:24.556 ****** 2025-10-09 10:47:25.480002 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.480010 | orchestrator | 2025-10-09 10:47:25.480018 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-10-09 10:47:25.480026 | orchestrator | Thursday 09 October 2025 10:47:04 +0000 (0:00:01.463) 0:09:26.020 ****** 2025-10-09 10:47:25.480033 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-10-09 10:47:25.480041 | orchestrator | 2025-10-09 10:47:25.480049 | orchestrator | TASK [nova-cell : Remove old 
nova_libvirt_secrets container volume] ************ 2025-10-09 10:47:25.480057 | orchestrator | Thursday 09 October 2025 10:47:16 +0000 (0:00:12.018) 0:09:38.038 ****** 2025-10-09 10:47:25.480064 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:47:25.480072 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:47:25.480080 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:47:25.480088 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:25.480095 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:47:25.480116 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:47:25.480124 | orchestrator | 2025-10-09 10:47:25.480132 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-10-09 10:47:25.480145 | orchestrator | 2025-10-09 10:47:25.480153 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-10-09 10:47:25.480161 | orchestrator | Thursday 09 October 2025 10:47:18 +0000 (0:00:01.834) 0:09:39.873 ****** 2025-10-09 10:47:25.480169 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:47:25.480176 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:47:25.480184 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:47:25.480192 | orchestrator | 2025-10-09 10:47:25.480204 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-10-09 10:47:25.480212 | orchestrator | 2025-10-09 10:47:25.480219 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-10-09 10:47:25.480227 | orchestrator | Thursday 09 October 2025 10:47:19 +0000 (0:00:01.147) 0:09:41.021 ****** 2025-10-09 10:47:25.480235 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.480243 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.480250 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.480258 | orchestrator | 2025-10-09 10:47:25.480266 | orchestrator | PLAY [Reload Nova 
cell services] *********************************************** 2025-10-09 10:47:25.480273 | orchestrator | 2025-10-09 10:47:25.480281 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-10-09 10:47:25.480289 | orchestrator | Thursday 09 October 2025 10:47:20 +0000 (0:00:00.525) 0:09:41.546 ****** 2025-10-09 10:47:25.480300 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-10-09 10:47:25.480309 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-10-09 10:47:25.480317 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-10-09 10:47:25.480324 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-10-09 10:47:25.480332 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-10-09 10:47:25.480340 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-10-09 10:47:25.480348 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:47:25.480355 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-10-09 10:47:25.480363 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-10-09 10:47:25.480371 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-10-09 10:47:25.480379 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-10-09 10:47:25.480386 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-10-09 10:47:25.480394 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-10-09 10:47:25.480402 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:47:25.480409 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-10-09 10:47:25.480417 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-10-09 10:47:25.480425 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-10-09 10:47:25.480433 
| orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-10-09 10:47:25.480440 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-10-09 10:47:25.480448 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-10-09 10:47:25.480456 | orchestrator | skipping: [testbed-node-5] 2025-10-09 10:47:25.480464 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-10-09 10:47:25.480471 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-10-09 10:47:25.480479 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-10-09 10:47:25.480487 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-10-09 10:47:25.480495 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-10-09 10:47:25.480502 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-10-09 10:47:25.480510 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.480518 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-10-09 10:47:25.480531 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-10-09 10:47:25.480539 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-10-09 10:47:25.480547 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-10-09 10:47:25.480554 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-10-09 10:47:25.480562 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-10-09 10:47:25.480570 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.480577 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-10-09 10:47:25.480585 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-10-09 10:47:25.480593 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-10-09 10:47:25.480600 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-10-09 10:47:25.480608 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-10-09 10:47:25.480616 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-10-09 10:47:25.480624 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.480631 | orchestrator | 2025-10-09 10:47:25.480639 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-10-09 10:47:25.480647 | orchestrator | 2025-10-09 10:47:25.480655 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-10-09 10:47:25.480663 | orchestrator | Thursday 09 October 2025 10:47:21 +0000 (0:00:01.517) 0:09:43.063 ****** 2025-10-09 10:47:25.480670 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-10-09 10:47:25.480678 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-10-09 10:47:25.480686 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.480694 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-10-09 10:47:25.480701 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-10-09 10:47:25.480709 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.480717 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-10-09 10:47:25.480725 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-10-09 10:47:25.480733 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.480740 | orchestrator | 2025-10-09 10:47:25.480748 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-10-09 10:47:25.480756 | orchestrator | 2025-10-09 10:47:25.480764 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-10-09 10:47:25.480775 | orchestrator | Thursday 09 October 2025 10:47:22 +0000 
(0:00:00.775) 0:09:43.839 ****** 2025-10-09 10:47:25.480783 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.480791 | orchestrator | 2025-10-09 10:47:25.480799 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-10-09 10:47:25.480807 | orchestrator | 2025-10-09 10:47:25.480814 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-10-09 10:47:25.480822 | orchestrator | Thursday 09 October 2025 10:47:23 +0000 (0:00:00.713) 0:09:44.553 ****** 2025-10-09 10:47:25.480830 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:25.480838 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:25.480846 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:25.480853 | orchestrator | 2025-10-09 10:47:25.480861 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:47:25.480873 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-10-09 10:47:25.480882 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-10-09 10:47:25.480890 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-10-09 10:47:25.480903 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-10-09 10:47:25.480911 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-10-09 10:47:25.480919 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-10-09 10:47:25.480927 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-10-09 10:47:25.480935 | orchestrator | 2025-10-09 10:47:25.480942 | orchestrator | 2025-10-09 10:47:25.480950 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:47:25.480958 | orchestrator | Thursday 09 October 2025 10:47:23 +0000 (0:00:00.426) 0:09:44.980 ****** 2025-10-09 10:47:25.480966 | orchestrator | =============================================================================== 2025-10-09 10:47:25.480974 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 34.90s 2025-10-09 10:47:25.480981 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.73s 2025-10-09 10:47:25.480989 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.17s 2025-10-09 10:47:25.480997 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.25s 2025-10-09 10:47:25.481005 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.70s 2025-10-09 10:47:25.481012 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.48s 2025-10-09 10:47:25.481020 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 20.76s 2025-10-09 10:47:25.481028 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.88s 2025-10-09 10:47:25.481036 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.65s 2025-10-09 10:47:25.481043 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.66s 2025-10-09 10:47:25.481051 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 14.37s 2025-10-09 10:47:25.481059 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.27s 2025-10-09 10:47:25.481066 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.57s 2025-10-09 10:47:25.481074 | orchestrator | 
nova-cell : Get a list of existing cells ------------------------------- 13.52s 2025-10-09 10:47:25.481082 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.90s 2025-10-09 10:47:25.481089 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.02s 2025-10-09 10:47:25.481097 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 11.33s 2025-10-09 10:47:25.481138 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.77s 2025-10-09 10:47:25.481146 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 9.44s 2025-10-09 10:47:25.481154 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.39s 2025-10-09 10:47:28.509602 | orchestrator | 2025-10-09 10:47:28 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state STARTED 2025-10-09 10:47:28.510987 | orchestrator | 2025-10-09 10:47:28 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:47:28.511018 | orchestrator | 2025-10-09 10:47:28 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:47:31.550600 | orchestrator | 2025-10-09 10:47:31 | INFO  | Task f7d3e2a5-7cd2-4858-bb54-853d5bfdd029 is in state SUCCESS 2025-10-09 10:47:31.552340 | orchestrator | 2025-10-09 10:47:31.552374 | orchestrator | 2025-10-09 10:47:31.552387 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:47:31.552399 | orchestrator | 2025-10-09 10:47:31.552450 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:47:31.552463 | orchestrator | Thursday 09 October 2025 10:44:58 +0000 (0:00:00.282) 0:00:00.282 ****** 2025-10-09 10:47:31.552474 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:31.552487 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:47:31.552498 
| orchestrator | ok: [testbed-node-2] 2025-10-09 10:47:31.552509 | orchestrator | 2025-10-09 10:47:31.552520 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:47:31.552531 | orchestrator | Thursday 09 October 2025 10:44:59 +0000 (0:00:00.322) 0:00:00.605 ****** 2025-10-09 10:47:31.552542 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-10-09 10:47:31.552553 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-10-09 10:47:31.552564 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-10-09 10:47:31.552575 | orchestrator | 2025-10-09 10:47:31.552586 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-10-09 10:47:31.552597 | orchestrator | 2025-10-09 10:47:31.552608 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-10-09 10:47:31.552619 | orchestrator | Thursday 09 October 2025 10:44:59 +0000 (0:00:00.529) 0:00:01.135 ****** 2025-10-09 10:47:31.552630 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:47:31.552642 | orchestrator | 2025-10-09 10:47:31.552653 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-10-09 10:47:31.552664 | orchestrator | Thursday 09 October 2025 10:45:00 +0000 (0:00:00.601) 0:00:01.736 ****** 2025-10-09 10:47:31.552678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.552694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.552706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.552718 | orchestrator | 2025-10-09 10:47:31.552730 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-10-09 10:47:31.552741 | orchestrator | Thursday 09 October 2025 10:45:01 +0000 (0:00:00.792) 0:00:02.528 ****** 2025-10-09 10:47:31.552752 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this 
access 2025-10-09 10:47:31.552771 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-10-09 10:47:31.552783 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:47:31.552794 | orchestrator | 2025-10-09 10:47:31.552805 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-10-09 10:47:31.552815 | orchestrator | Thursday 09 October 2025 10:45:01 +0000 (0:00:00.829) 0:00:03.358 ****** 2025-10-09 10:47:31.552827 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:47:31.552837 | orchestrator | 2025-10-09 10:47:31.552848 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-10-09 10:47:31.552859 | orchestrator | Thursday 09 October 2025 10:45:02 +0000 (0:00:00.827) 0:00:04.186 ****** 2025-10-09 10:47:31.552890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.552903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.552915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.552926 | orchestrator | 2025-10-09 10:47:31.552939 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-10-09 10:47:31.552952 | orchestrator | Thursday 09 October 2025 10:45:04 +0000 (0:00:01.707) 0:00:05.894 ****** 2025-10-09 10:47:31.552965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}}}})  2025-10-09 10:47:31.552978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:47:31.552998 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:31.553011 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:31.553031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:47:31.553050 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:31.553063 | orchestrator | 2025-10-09 10:47:31.553075 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-10-09 10:47:31.553087 | orchestrator | Thursday 09 October 2025 10:45:05 +0000 (0:00:00.622) 0:00:06.516 ****** 2025-10-09 10:47:31.553100 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:47:31.553132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:47:31.553145 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:31.553156 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:31.553168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-10-09 10:47:31.553179 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:31.553190 | orchestrator | 2025-10-09 10:47:31.553202 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-10-09 10:47:31.553219 | orchestrator | Thursday 09 October 2025 10:45:06 +0000 (0:00:01.126) 0:00:07.642 ****** 2025-10-09 10:47:31.553231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.553242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.553266 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.553279 | orchestrator | 2025-10-09 10:47:31.553290 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-10-09 10:47:31.553301 | orchestrator | Thursday 09 October 2025 10:45:07 +0000 (0:00:01.578) 0:00:09.221 ****** 2025-10-09 10:47:31.553312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.553323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.553335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-10-09 10:47:31.553353 | orchestrator | 2025-10-09 10:47:31.553364 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-10-09 10:47:31.553375 | orchestrator | Thursday 09 October 2025 10:45:09 +0000 (0:00:01.779) 0:00:11.001 ****** 2025-10-09 10:47:31.553386 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:31.553397 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:31.553408 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:31.553420 | orchestrator | 2025-10-09 10:47:31.553431 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-10-09 10:47:31.553442 | orchestrator | Thursday 09 October 2025 10:45:10 +0000 (0:00:00.579) 0:00:11.581 ****** 2025-10-09 10:47:31.553453 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-10-09 
10:47:31.553464 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-10-09 10:47:31.553474 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-10-09 10:47:31.553485 | orchestrator | 2025-10-09 10:47:31.553496 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-10-09 10:47:31.553507 | orchestrator | Thursday 09 October 2025 10:45:11 +0000 (0:00:01.543) 0:00:13.124 ****** 2025-10-09 10:47:31.553517 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-10-09 10:47:31.553529 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-10-09 10:47:31.553539 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-10-09 10:47:31.553550 | orchestrator | 2025-10-09 10:47:31.553561 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-10-09 10:47:31.553572 | orchestrator | Thursday 09 October 2025 10:45:13 +0000 (0:00:01.415) 0:00:14.539 ****** 2025-10-09 10:47:31.553589 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-10-09 10:47:31.553600 | orchestrator | 2025-10-09 10:47:31.553615 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-10-09 10:47:31.553626 | orchestrator | Thursday 09 October 2025 10:45:14 +0000 (0:00:01.176) 0:00:15.716 ****** 2025-10-09 10:47:31.553637 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-10-09 10:47:31.553648 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-10-09 10:47:31.553659 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:47:31.553670 | orchestrator | ok: 
[testbed-node-1] 2025-10-09 10:47:31.553681 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:47:31.553692 | orchestrator | 2025-10-09 10:47:31.553703 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-10-09 10:47:31.553714 | orchestrator | Thursday 09 October 2025 10:45:15 +0000 (0:00:00.769) 0:00:16.486 ****** 2025-10-09 10:47:31.553724 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:47:31.553735 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:47:31.553746 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:47:31.553757 | orchestrator | 2025-10-09 10:47:31.553768 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-10-09 10:47:31.553779 | orchestrator | Thursday 09 October 2025 10:45:15 +0000 (0:00:00.644) 0:00:17.130 ****** 2025-10-09 10:47:31.553790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1089455, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0248327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1089455, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 
'mtime': 1759968135.0, 'ctime': 1760003541.0248327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1089455, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0248327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1089642, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1018333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1089642, 'dev': 112, 
'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1018333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1089642, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1018333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1089476, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0293033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1089476, 
'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0293033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1089476, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0293033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1089644, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1038334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 167897, 'inode': 1089644, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1038334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1089644, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1038334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.553981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1089497, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0353732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1089497, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0353732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1089497, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0353732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1089636, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0988333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1089636, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0988333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1089636, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0988333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1089451, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.022676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1089451, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.022676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1089451, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.022676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1089464, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0258327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1089464, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0258327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1089464, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0258327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1089479, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0308328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1089479, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0308328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1089479, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0308328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1089508, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0379279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1089508, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0379279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1089508, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0379279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1089640, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1008334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1089640, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1008334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1089640, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1008334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1089468, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0278327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1089468, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0278327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1089560, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0988333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1089560, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0988333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554597 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1089468, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0278327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1089503, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0368328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1089503, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0368328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554631 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1089560, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0988333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1089493, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0338328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1089493, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0338328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-10-09 10:47:31.554685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1089503, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0368328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1089490, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0328329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1089490, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0328329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1089493, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0338328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1089516, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.041833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1089516, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.041833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1089490, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0328329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1089487, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0318327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1089487, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0318327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1089516, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.041833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1089637, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1008334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1089637, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1008334, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1089487, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.0318327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1089763, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1508338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1089763, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1508338, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1089637, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1008334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1089678, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1208336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1089678, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 
'mtime': 1759968135.0, 'ctime': 1760003541.1208336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1089664, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1098335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1089763, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1508338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1089664, 
'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1098335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1089709, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1258335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.554994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1089709, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1258335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1089678, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1208336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1089653, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1055875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1089653, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1055875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1089664, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1098335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1089736, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.138454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1089736, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.138454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1089709, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1258335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1089713, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1358337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1089713, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1358337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555211 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1089653, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1055875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1089741, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1388338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1089741, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1388338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1089736, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.138454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1089758, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1478338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1089758, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1478338, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1089713, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1358337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1089734, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1368337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1089734, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 
'ctime': 1760003541.1368337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1089741, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1388338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1089701, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.123238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 
'inode': 1089701, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.123238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1089758, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1478338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1089675, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1158335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1089675, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1158335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1089734, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1368337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1089696, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1215873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1089696, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1215873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1089701, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.123238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1089667, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1148336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1089667, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1148336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1089675, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1158335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1089704, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1248336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555537 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1089704, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1248336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1089696, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1215873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1089750, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1458337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-10-09 10:47:31.555583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1089750, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1458337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1089667, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1148336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1089746, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1424012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1089746, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1424012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1089704, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1248336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1089656, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 
'ctime': 1760003541.1078334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1089656, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1078334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1089750, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1458337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1089659, 'dev': 112, 
'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1088336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1089659, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1088336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1089746, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1424012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1089732, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1358337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1089732, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1358337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1089656, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1078334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1089745, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1395137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1089745, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1395137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1089659, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1088336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-10-09 10:47:31.555835 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1089732, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1358337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-09 10:47:31.555847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1089745, 'dev': 112, 'nlink': 1, 'atime': 1759968135.0, 'mtime': 1759968135.0, 'ctime': 1760003541.1395137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-10-09 10:47:31.555858 | orchestrator |
2025-10-09 10:47:31.555869 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-10-09 10:47:31.555880 | orchestrator | Thursday 09 October 2025 10:45:56 +0000 (0:00:41.310) 0:00:58.441 ******
2025-10-09 10:47:31.555891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-10-09 10:47:31.555909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-10-09 10:47:31.555921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-10-09 10:47:31.555932 | orchestrator |
2025-10-09 10:47:31.555943 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-10-09 10:47:31.555954 | orchestrator | Thursday 09 October 2025 10:45:58 +0000
(0:00:01.042) 0:00:59.484 ******
2025-10-09 10:47:31.555965 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:47:31.555976 | orchestrator |
2025-10-09 10:47:31.555987 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-10-09 10:47:31.555998 | orchestrator | Thursday 09 October 2025 10:46:00 +0000 (0:00:02.496) 0:01:01.980 ******
2025-10-09 10:47:31.556009 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:47:31.556020 | orchestrator |
2025-10-09 10:47:31.556030 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-10-09 10:47:31.556041 | orchestrator | Thursday 09 October 2025 10:46:03 +0000 (0:00:02.469) 0:01:04.449 ******
2025-10-09 10:47:31.556052 | orchestrator |
2025-10-09 10:47:31.556063 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-10-09 10:47:31.556079 | orchestrator | Thursday 09 October 2025 10:46:03 +0000 (0:00:00.086) 0:01:04.535 ******
2025-10-09 10:47:31.556091 | orchestrator |
2025-10-09 10:47:31.556122 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-10-09 10:47:31.556135 | orchestrator | Thursday 09 October 2025 10:46:03 +0000 (0:00:00.062) 0:01:04.598 ******
2025-10-09 10:47:31.556146 | orchestrator |
2025-10-09 10:47:31.556157 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-10-09 10:47:31.556168 | orchestrator | Thursday 09 October 2025 10:46:03 +0000 (0:00:00.253) 0:01:04.851 ******
2025-10-09 10:47:31.556179 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:47:31.556190 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:47:31.556201 | orchestrator | changed: [testbed-node-0]
2025-10-09 10:47:31.556212 | orchestrator |
2025-10-09 10:47:31.556223 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-10-09 10:47:31.556234 | orchestrator | Thursday 09 October 2025 10:46:05 +0000 (0:00:02.122) 0:01:06.973 ******
2025-10-09 10:47:31.556245 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:47:31.556263 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:47:31.556274 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-10-09 10:47:31.556285 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-10-09 10:47:31.556296 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-10-09 10:47:31.556307 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:47:31.556318 | orchestrator |
2025-10-09 10:47:31.556329 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-10-09 10:47:31.556340 | orchestrator | Thursday 09 October 2025 10:46:44 +0000 (0:00:39.204) 0:01:46.178 ******
2025-10-09 10:47:31.556351 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:47:31.556362 | orchestrator | changed: [testbed-node-2]
2025-10-09 10:47:31.556373 | orchestrator | changed: [testbed-node-1]
2025-10-09 10:47:31.556384 | orchestrator |
2025-10-09 10:47:31.556395 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-10-09 10:47:31.556406 | orchestrator | Thursday 09 October 2025 10:47:22 +0000 (0:00:38.135) 0:02:24.314 ******
2025-10-09 10:47:31.556417 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:47:31.556428 | orchestrator |
2025-10-09 10:47:31.556439 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-10-09 10:47:31.556450 | orchestrator | Thursday 09 October 2025 10:47:25 +0000 (0:00:02.253) 0:02:26.568 ******
2025-10-09 10:47:31.556461 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:47:31.556472 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:47:31.556483 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:47:31.556493 | orchestrator |
2025-10-09 10:47:31.556504 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-10-09 10:47:31.556515 | orchestrator | Thursday 09 October 2025 10:47:25 +0000 (0:00:00.521) 0:02:27.089 ******
2025-10-09 10:47:31.556527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-10-09 10:47:31.556540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-10-09 10:47:31.556551 | orchestrator |
2025-10-09 10:47:31.556562 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-10-09 10:47:31.556573 | orchestrator | Thursday 09 October 2025 10:47:28 +0000 (0:00:02.486) 0:02:29.575 ******
2025-10-09 10:47:31.556584 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:47:31.556595 | orchestrator |
2025-10-09 10:47:31.556606 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:47:31.556617 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-10-09 10:47:31.556628 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-10-09 10:47:31.556639 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-10-09 10:47:31.556650 | orchestrator |
2025-10-09 10:47:31.556661 | orchestrator |
2025-10-09 10:47:31.556672 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:47:31.556682 | orchestrator | Thursday 09 October 2025 10:47:28 +0000 (0:00:00.258) 0:02:29.834 ******
2025-10-09 10:47:31.556693 | orchestrator | ===============================================================================
2025-10-09 10:47:31.556709 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 41.31s
2025-10-09 10:47:31.556720 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.20s
2025-10-09 10:47:31.556731 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 38.14s
2025-10-09 10:47:31.556742 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.50s
2025-10-09 10:47:31.556753 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.49s
2025-10-09 10:47:31.556769 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.47s
2025-10-09 10:47:31.556785 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.25s
2025-10-09 10:47:31.556796 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.12s
2025-10-09 10:47:31.556807 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.78s
2025-10-09 10:47:31.556818 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.71s
2025-10-09 10:47:31.556828 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.58s
2025-10-09 10:47:31.556839 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.54s
2025-10-09 10:47:31.556850 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.42s
2025-10-09 10:47:31.556860 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.18s
2025-10-09 10:47:31.556871 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.13s
2025-10-09 10:47:31.556882 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.04s
2025-10-09 10:47:31.556893 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s
2025-10-09 10:47:31.556903 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.83s
2025-10-09 10:47:31.556914 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.79s
2025-10-09 10:47:31.556925 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.77s
2025-10-09 10:47:31.556935 | orchestrator | 2025-10-09 10:47:31 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED
2025-10-09 10:47:31.556947 | orchestrator | 2025-10-09 10:47:31 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:47:34.600224 | orchestrator | 2025-10-09 10:47:34 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED
2025-10-09 10:47:34.600330 | orchestrator | 2025-10-09 10:47:34 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:47:37.658558 | orchestrator | 2025-10-09 10:47:37 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED
2025-10-09 10:47:37.658657 | orchestrator | 2025-10-09 10:47:37 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:47:40.710707 | orchestrator | 2025-10-09 10:47:40 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED
2025-10-09 10:47:40.710807 | orchestrator | 2025-10-09 10:47:40 | INFO  | Wait 1 second(s) until the next check
2025-10-09 10:47:43.757621 | orchestrator
| 2025-10-09 10:47:43 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED [… repeated "is in state STARTED" / "Wait 1 second(s) until the next check" polling messages, emitted every ~3 s from 10:47:43 to 10:50:31, omitted …] 2025-10-09 10:50:34.332014 | orchestrator | 2025-10-09 10:50:34 | INFO  | Task 
ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:50:34.332162 | orchestrator | 2025-10-09 10:50:34 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:50:37.369541 | orchestrator | 2025-10-09 10:50:37 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:50:37.369635 | orchestrator | 2025-10-09 10:50:37 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:50:40.411695 | orchestrator | 2025-10-09 10:50:40 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state STARTED 2025-10-09 10:50:40.411789 | orchestrator | 2025-10-09 10:50:40 | INFO  | Wait 1 second(s) until the next check 2025-10-09 10:50:43.452239 | orchestrator | 2025-10-09 10:50:43 | INFO  | Task ca9337e4-8c75-4625-a69e-9ab54731b5ee is in state SUCCESS 2025-10-09 10:50:43.454788 | orchestrator | 2025-10-09 10:50:43.454845 | orchestrator | 2025-10-09 10:50:43.454859 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:50:43.454969 | orchestrator | 2025-10-09 10:50:43.454983 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:50:43.454995 | orchestrator | Thursday 09 October 2025 10:45:47 +0000 (0:00:00.315) 0:00:00.315 ****** 2025-10-09 10:50:43.455006 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:50:43.455019 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:50:43.455030 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:50:43.455056 | orchestrator | 2025-10-09 10:50:43.455097 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:50:43.455156 | orchestrator | Thursday 09 October 2025 10:45:48 +0000 (0:00:00.285) 0:00:00.600 ****** 2025-10-09 10:50:43.455276 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-10-09 10:50:43.455348 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-10-09 10:50:43.455360 | 
orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-10-09 10:50:43.455371 | orchestrator | 2025-10-09 10:50:43.455383 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-10-09 10:50:43.455396 | orchestrator | 2025-10-09 10:50:43.455408 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:50:43.455421 | orchestrator | Thursday 09 October 2025 10:45:48 +0000 (0:00:00.409) 0:00:01.010 ****** 2025-10-09 10:50:43.455460 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:50:43.455474 | orchestrator | 2025-10-09 10:50:43.455486 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-10-09 10:50:43.455499 | orchestrator | Thursday 09 October 2025 10:45:49 +0000 (0:00:00.684) 0:00:01.695 ****** 2025-10-09 10:50:43.455511 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-10-09 10:50:43.455523 | orchestrator | 2025-10-09 10:50:43.455535 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-10-09 10:50:43.455547 | orchestrator | Thursday 09 October 2025 10:45:53 +0000 (0:00:03.798) 0:00:05.493 ****** 2025-10-09 10:50:43.455559 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-10-09 10:50:43.455582 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-10-09 10:50:43.455596 | orchestrator | 2025-10-09 10:50:43.455608 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-10-09 10:50:43.455620 | orchestrator | Thursday 09 October 2025 10:45:59 +0000 (0:00:06.847) 0:00:12.340 ****** 2025-10-09 10:50:43.455632 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-10-09 
10:50:43.455644 | orchestrator | 2025-10-09 10:50:43.455656 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-10-09 10:50:43.455669 | orchestrator | Thursday 09 October 2025 10:46:03 +0000 (0:00:03.372) 0:00:15.713 ****** 2025-10-09 10:50:43.455681 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-10-09 10:50:43.455694 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-10-09 10:50:43.455706 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-10-09 10:50:43.455718 | orchestrator | 2025-10-09 10:50:43.455730 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-10-09 10:50:43.455741 | orchestrator | Thursday 09 October 2025 10:46:12 +0000 (0:00:08.711) 0:00:24.424 ****** 2025-10-09 10:50:43.455752 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-10-09 10:50:43.455763 | orchestrator | 2025-10-09 10:50:43.455773 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-10-09 10:50:43.455784 | orchestrator | Thursday 09 October 2025 10:46:15 +0000 (0:00:03.746) 0:00:28.171 ****** 2025-10-09 10:50:43.455795 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-10-09 10:50:43.455806 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-10-09 10:50:43.455833 | orchestrator | 2025-10-09 10:50:43.455846 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-10-09 10:50:43.455857 | orchestrator | Thursday 09 October 2025 10:46:23 +0000 (0:00:07.847) 0:00:36.019 ****** 2025-10-09 10:50:43.455908 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-10-09 10:50:43.455919 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-10-09 10:50:43.455930 | orchestrator | changed: 
[testbed-node-0] => (item=load-balancer_member) 2025-10-09 10:50:43.455941 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-10-09 10:50:43.455952 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-10-09 10:50:43.455972 | orchestrator | 2025-10-09 10:50:43.455983 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:50:43.455994 | orchestrator | Thursday 09 October 2025 10:46:40 +0000 (0:00:17.214) 0:00:53.234 ****** 2025-10-09 10:50:43.456004 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:50:43.456015 | orchestrator | 2025-10-09 10:50:43.456026 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-10-09 10:50:43.456037 | orchestrator | Thursday 09 October 2025 10:46:41 +0000 (0:00:00.919) 0:00:54.153 ****** 2025-10-09 10:50:43.456047 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.456058 | orchestrator | 2025-10-09 10:50:43.456217 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-10-09 10:50:43.456230 | orchestrator | Thursday 09 October 2025 10:46:47 +0000 (0:00:05.456) 0:00:59.610 ****** 2025-10-09 10:50:43.456241 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.456251 | orchestrator | 2025-10-09 10:50:43.456262 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-10-09 10:50:43.456289 | orchestrator | Thursday 09 October 2025 10:46:51 +0000 (0:00:04.681) 0:01:04.291 ****** 2025-10-09 10:50:43.456301 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:50:43.456311 | orchestrator | 2025-10-09 10:50:43.456322 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-10-09 10:50:43.456333 | orchestrator | Thursday 09 October 2025 10:46:55 
+0000 (0:00:03.346) 0:01:07.638 ****** 2025-10-09 10:50:43.456344 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-10-09 10:50:43.456355 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-10-09 10:50:43.456366 | orchestrator | 2025-10-09 10:50:43.456377 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-10-09 10:50:43.456388 | orchestrator | Thursday 09 October 2025 10:47:05 +0000 (0:00:10.268) 0:01:17.906 ****** 2025-10-09 10:50:43.456399 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-10-09 10:50:43.456410 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-10-09 10:50:43.456423 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-10-09 10:50:43.456435 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-10-09 10:50:43.456446 | orchestrator | 2025-10-09 10:50:43.456457 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-10-09 10:50:43.456468 | orchestrator | Thursday 09 October 2025 10:47:22 +0000 (0:00:17.273) 0:01:35.179 ****** 2025-10-09 10:50:43.456479 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.456489 | orchestrator | 2025-10-09 10:50:43.456500 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-10-09 10:50:43.456511 | orchestrator | Thursday 09 October 2025 10:47:27 +0000 (0:00:05.035) 0:01:40.214 ****** 2025-10-09 10:50:43.456522 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.456533 | orchestrator | 
2025-10-09 10:50:43.456543 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-10-09 10:50:43.456561 | orchestrator | Thursday 09 October 2025 10:47:33 +0000 (0:00:05.651) 0:01:45.866 ****** 2025-10-09 10:50:43.456572 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:50:43.456583 | orchestrator | 2025-10-09 10:50:43.456594 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-10-09 10:50:43.456604 | orchestrator | Thursday 09 October 2025 10:47:33 +0000 (0:00:00.223) 0:01:46.090 ****** 2025-10-09 10:50:43.456615 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.456626 | orchestrator | 2025-10-09 10:50:43.456637 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:50:43.456655 | orchestrator | Thursday 09 October 2025 10:47:39 +0000 (0:00:05.597) 0:01:51.687 ****** 2025-10-09 10:50:43.456666 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:50:43.456677 | orchestrator | 2025-10-09 10:50:43.456687 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-10-09 10:50:43.456698 | orchestrator | Thursday 09 October 2025 10:47:40 +0000 (0:00:01.077) 0:01:52.765 ****** 2025-10-09 10:50:43.456708 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.456719 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.456730 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.456741 | orchestrator | 2025-10-09 10:50:43.456751 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-10-09 10:50:43.456762 | orchestrator | Thursday 09 October 2025 10:47:45 +0000 (0:00:05.472) 0:01:58.238 ****** 2025-10-09 10:50:43.456773 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.456783 | orchestrator 
| changed: [testbed-node-1] 2025-10-09 10:50:43.456794 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.456805 | orchestrator | 2025-10-09 10:50:43.456815 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-10-09 10:50:43.456826 | orchestrator | Thursday 09 October 2025 10:47:50 +0000 (0:00:04.689) 0:02:02.928 ****** 2025-10-09 10:50:43.456837 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.456848 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.456858 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.456869 | orchestrator | 2025-10-09 10:50:43.456880 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-10-09 10:50:43.456891 | orchestrator | Thursday 09 October 2025 10:47:51 +0000 (0:00:00.848) 0:02:03.777 ****** 2025-10-09 10:50:43.456901 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:50:43.456912 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:50:43.456923 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:50:43.456934 | orchestrator | 2025-10-09 10:50:43.456944 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-10-09 10:50:43.456955 | orchestrator | Thursday 09 October 2025 10:47:53 +0000 (0:00:02.333) 0:02:06.110 ****** 2025-10-09 10:50:43.456966 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.456977 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.456987 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.456998 | orchestrator | 2025-10-09 10:50:43.457009 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-10-09 10:50:43.457020 | orchestrator | Thursday 09 October 2025 10:47:55 +0000 (0:00:01.523) 0:02:07.634 ****** 2025-10-09 10:50:43.457031 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.457041 | orchestrator | changed: [testbed-node-1] 
2025-10-09 10:50:43.457052 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.457063 | orchestrator | 2025-10-09 10:50:43.457090 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-10-09 10:50:43.457101 | orchestrator | Thursday 09 October 2025 10:47:56 +0000 (0:00:01.192) 0:02:08.827 ****** 2025-10-09 10:50:43.457112 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.457123 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.457134 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.457145 | orchestrator | 2025-10-09 10:50:43.457162 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-10-09 10:50:43.457173 | orchestrator | Thursday 09 October 2025 10:47:58 +0000 (0:00:02.170) 0:02:10.997 ****** 2025-10-09 10:50:43.457184 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.457195 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.457205 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.457216 | orchestrator | 2025-10-09 10:50:43.457227 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-10-09 10:50:43.457238 | orchestrator | Thursday 09 October 2025 10:48:00 +0000 (0:00:01.888) 0:02:12.886 ****** 2025-10-09 10:50:43.457255 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:50:43.457266 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:50:43.457277 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:50:43.457288 | orchestrator | 2025-10-09 10:50:43.457298 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-10-09 10:50:43.457309 | orchestrator | Thursday 09 October 2025 10:48:02 +0000 (0:00:01.714) 0:02:14.600 ****** 2025-10-09 10:50:43.457320 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:50:43.457331 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:50:43.457342 | 
orchestrator | ok: [testbed-node-1]
2025-10-09 10:50:43.457353 | orchestrator |
2025-10-09 10:50:43.457364 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-10-09 10:50:43.457374 | orchestrator | Thursday 09 October 2025 10:48:05 +0000 (0:00:02.953) 0:02:17.554 ******
2025-10-09 10:50:43.457385 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:50:43.457396 | orchestrator |
2025-10-09 10:50:43.457407 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-10-09 10:50:43.457418 | orchestrator | Thursday 09 October 2025 10:48:06 +0000 (0:00:00.833) 0:02:18.387 ******
2025-10-09 10:50:43.457429 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:50:43.457439 | orchestrator |
2025-10-09 10:50:43.457450 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-10-09 10:50:43.457461 | orchestrator | Thursday 09 October 2025 10:48:09 +0000 (0:00:03.408) 0:02:21.796 ******
2025-10-09 10:50:43.457472 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:50:43.457483 | orchestrator |
2025-10-09 10:50:43.457493 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-10-09 10:50:43.457509 | orchestrator | Thursday 09 October 2025 10:48:12 +0000 (0:00:03.440) 0:02:25.236 ******
2025-10-09 10:50:43.457520 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-10-09 10:50:43.457531 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-10-09 10:50:43.457542 | orchestrator |
2025-10-09 10:50:43.457553 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-10-09 10:50:43.457564 | orchestrator | Thursday 09 October 2025 10:48:20 +0000 (0:00:07.350) 0:02:32.587 ******
2025-10-09 10:50:43.457574 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:50:43.457585 | orchestrator |
2025-10-09 10:50:43.457596 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-10-09 10:50:43.457606 | orchestrator | Thursday 09 October 2025 10:48:23 +0000 (0:00:03.593) 0:02:36.180 ******
2025-10-09 10:50:43.457617 | orchestrator | ok: [testbed-node-0]
2025-10-09 10:50:43.457628 | orchestrator | ok: [testbed-node-1]
2025-10-09 10:50:43.457639 | orchestrator | ok: [testbed-node-2]
2025-10-09 10:50:43.457649 | orchestrator |
2025-10-09 10:50:43.457660 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-10-09 10:50:43.457671 | orchestrator | Thursday 09 October 2025 10:48:24 +0000 (0:00:00.352) 0:02:36.533 ******
2025-10-09 10:50:43.457685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.457707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.457726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.457738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.457756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.457767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.457779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.457807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.457825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.457838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.457849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.457865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.457877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.457889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.457907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.457918 | orchestrator |
2025-10-09 10:50:43.457930 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2025-10-09 10:50:43.457941 | orchestrator | Thursday 09 October 2025 10:48:26 +0000 (0:00:02.620) 0:02:39.153 ******
2025-10-09 10:50:43.457952 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:50:43.457963 | orchestrator |
2025-10-09 10:50:43.457979 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2025-10-09 10:50:43.457990 | orchestrator | Thursday 09 October 2025 10:48:26 +0000 (0:00:00.143) 0:02:39.296 ******
2025-10-09 10:50:43.458001 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:50:43.458012 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:50:43.458091 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:50:43.458103 | orchestrator |
2025-10-09 10:50:43.458114 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2025-10-09 10:50:43.458125 | orchestrator | Thursday 09 October 2025 10:48:27 +0000 (0:00:00.556) 0:02:39.853 ******
2025-10-09 10:50:43.458137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.458154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.458166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.458208 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:50:43.458229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.458241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.458257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.458298 | orchestrator | skipping: [testbed-node-1]
2025-10-09 10:50:43.458310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.458329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.458341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.458386 | orchestrator | skipping: [testbed-node-2]
2025-10-09 10:50:43.458398 | orchestrator |
2025-10-09 10:50:43.458409 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-10-09 10:50:43.458420 | orchestrator | Thursday 09 October 2025 10:48:28 +0000 (0:00:00.761) 0:02:40.615 ******
2025-10-09 10:50:43.458431 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-10-09 10:50:43.458442 | orchestrator |
2025-10-09 10:50:43.458453 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2025-10-09 10:50:43.458464 | orchestrator | Thursday 09 October 2025 10:48:28 +0000 (0:00:00.578) 0:02:41.193 ******
2025-10-09 10:50:43.458475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.458728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.458816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.458844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.458878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.458891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.458903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.458994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.459006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.459018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.459037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.459051 | orchestrator |
2025-10-09 10:50:43.459064 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2025-10-09 10:50:43.459103 | orchestrator | Thursday 09 October 2025 10:48:34 +0000 (0:00:05.431) 0:02:46.625 ******
2025-10-09 10:50:43.459117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.459134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.459153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.459165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.459176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-10-09 10:50:43.459187 | orchestrator | skipping: [testbed-node-0]
2025-10-09 10:50:43.459207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-10-09 10:50:43.459219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-10-09 10:50:43.459230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-10-09 10:50:43.459253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group':
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:50:43.459266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:50:43.459278 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:50:43.459292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:50:43.459310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:50:43.459323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:50:43.459335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:50:43.459359 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:50:43.459372 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:50:43.459384 | orchestrator | 2025-10-09 10:50:43.459397 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-10-09 10:50:43.459409 | orchestrator | Thursday 09 October 2025 10:48:35 +0000 (0:00:00.933) 0:02:47.559 ****** 2025-10-09 10:50:43.459422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:50:43.459436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:50:43.459448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:50:43.459467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:50:43.459487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:50:43.459504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:50:43.459518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:50:43.459529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:50:43.459541 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:50:43.459553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:50:43.459571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:50:43.459589 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:50:43.459601 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-10-09 10:50:43.459618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-10-09 10:50:43.459630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-10-09 10:50:43.459641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-10-09 10:50:43.459652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-10-09 10:50:43.459664 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:50:43.459675 | orchestrator | 2025-10-09 10:50:43.459686 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-10-09 10:50:43.459697 | orchestrator | Thursday 09 October 2025 10:48:36 +0000 (0:00:01.022) 0:02:48.581 ****** 2025-10-09 10:50:43.459715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:50:43.459744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:50:43.459756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:50:43.459768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:50:43.459779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:50:43.459791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:50:43.459816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.459828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.459844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.459856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.459867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.459879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.459897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:50:43.459917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:50:43.459929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:50:43.459940 | orchestrator | 2025-10-09 10:50:43.459951 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-10-09 10:50:43.459962 | orchestrator | Thursday 09 October 2025 10:48:41 +0000 (0:00:05.144) 0:02:53.725 ****** 2025-10-09 10:50:43.459978 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-10-09 10:50:43.459990 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-10-09 10:50:43.460001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-10-09 10:50:43.460012 | orchestrator | 2025-10-09 10:50:43.460023 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-10-09 10:50:43.460035 | orchestrator | Thursday 09 October 2025 10:48:43 +0000 (0:00:01.842) 0:02:55.568 ****** 2025-10-09 10:50:43.460046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2025-10-09 10:50:43.460058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:50:43.460113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:50:43.460127 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:50:43.460144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:50:43.460156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:50:43.460167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.460179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.460197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.460215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.460227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.460244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.460256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:50:43.460267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:50:43.460279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:50:43.460297 | orchestrator | 2025-10-09 10:50:43.460308 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-10-09 10:50:43.460319 | orchestrator | Thursday 09 October 2025 10:49:00 +0000 (0:00:16.926) 0:03:12.495 ****** 2025-10-09 10:50:43.460331 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.460342 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.460353 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.460364 | orchestrator | 2025-10-09 10:50:43.460375 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-10-09 10:50:43.460386 
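The container definitions above attach healthchecks such as `healthcheck_port octavia-worker 5672` and `healthcheck_curl http://192.168.16.10:9876`, which Docker runs inside the container at the given interval. As a minimal sketch of what a port-based probe amounts to (an assumption about the helper's intent; kolla's real `healthcheck_port` script inspects the process's established connections rather than dialing the port itself):

```python
import socket

def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Illustrative stand-in for kolla's healthcheck_port helper; the real
    script checks the named service's peer connections, not a raw dial.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable, or timed out -> unhealthy.
        return False
```

A failing probe is retried `retries` times (3 above) before the container is marked unhealthy.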
| orchestrator | Thursday 09 October 2025 10:49:01 +0000 (0:00:01.638) 0:03:14.133 ****** 2025-10-09 10:50:43.460396 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-10-09 10:50:43.460408 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-10-09 10:50:43.460424 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-10-09 10:50:43.460435 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-10-09 10:50:43.460447 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-10-09 10:50:43.460458 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-10-09 10:50:43.460469 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-10-09 10:50:43.460480 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-10-09 10:50:43.460491 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-10-09 10:50:43.460502 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-10-09 10:50:43.460512 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-10-09 10:50:43.460523 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-10-09 10:50:43.460534 | orchestrator | 2025-10-09 10:50:43.460545 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-10-09 10:50:43.460556 | orchestrator | Thursday 09 October 2025 10:49:07 +0000 (0:00:05.520) 0:03:19.654 ****** 2025-10-09 10:50:43.460567 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-10-09 10:50:43.460578 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-10-09 10:50:43.460589 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-10-09 10:50:43.460600 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-10-09 
10:50:43.460611 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-10-09 10:50:43.460622 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-10-09 10:50:43.460633 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-10-09 10:50:43.460644 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-10-09 10:50:43.460654 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-10-09 10:50:43.460665 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-10-09 10:50:43.460676 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-10-09 10:50:43.460692 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-10-09 10:50:43.460703 | orchestrator | 2025-10-09 10:50:43.460714 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-10-09 10:50:43.460725 | orchestrator | Thursday 09 October 2025 10:49:12 +0000 (0:00:05.699) 0:03:25.354 ****** 2025-10-09 10:50:43.460736 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-10-09 10:50:43.460747 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-10-09 10:50:43.460764 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-10-09 10:50:43.460775 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-10-09 10:50:43.460786 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-10-09 10:50:43.460797 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-10-09 10:50:43.460808 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-10-09 10:50:43.460819 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-10-09 10:50:43.460830 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-10-09 10:50:43.460840 | 
orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-10-09 10:50:43.460851 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-10-09 10:50:43.460862 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-10-09 10:50:43.460873 | orchestrator | 2025-10-09 10:50:43.460884 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-10-09 10:50:43.460895 | orchestrator | Thursday 09 October 2025 10:49:18 +0000 (0:00:05.202) 0:03:30.556 ****** 2025-10-09 10:50:43.460906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:50:43.460926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:50:43.460938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-10-09 10:50:43.460963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 
10:50:43.460975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:50:43.460986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-10-09 10:50:43.460998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.461015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.461027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.461038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.461064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.461130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-10-09 10:50:43.461142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:50:43.461154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:50:43.461173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-10-09 10:50:43.461185 | orchestrator | 2025-10-09 10:50:43.461196 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-10-09 10:50:43.461207 | orchestrator | Thursday 09 October 2025 10:49:22 +0000 (0:00:03.824) 0:03:34.381 ****** 2025-10-09 10:50:43.461218 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:50:43.461229 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:50:43.461240 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:50:43.461251 | orchestrator | 2025-10-09 10:50:43.461262 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-10-09 10:50:43.461273 | orchestrator | Thursday 09 October 2025 10:49:22 +0000 (0:00:00.346) 0:03:34.727 ****** 2025-10-09 10:50:43.461284 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461294 | orchestrator | 2025-10-09 10:50:43.461305 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-10-09 10:50:43.461323 | orchestrator | Thursday 09 October 2025 10:49:24 +0000 (0:00:02.113) 0:03:36.841 ****** 
2025-10-09 10:50:43.461335 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461346 | orchestrator | 2025-10-09 10:50:43.461356 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-10-09 10:50:43.461367 | orchestrator | Thursday 09 October 2025 10:49:26 +0000 (0:00:02.080) 0:03:38.922 ****** 2025-10-09 10:50:43.461378 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461389 | orchestrator | 2025-10-09 10:50:43.461400 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-10-09 10:50:43.461411 | orchestrator | Thursday 09 October 2025 10:49:28 +0000 (0:00:02.156) 0:03:41.078 ****** 2025-10-09 10:50:43.461422 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461433 | orchestrator | 2025-10-09 10:50:43.461444 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-10-09 10:50:43.461455 | orchestrator | Thursday 09 October 2025 10:49:31 +0000 (0:00:02.730) 0:03:43.809 ****** 2025-10-09 10:50:43.461466 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461476 | orchestrator | 2025-10-09 10:50:43.461487 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-10-09 10:50:43.461503 | orchestrator | Thursday 09 October 2025 10:49:53 +0000 (0:00:21.893) 0:04:05.703 ****** 2025-10-09 10:50:43.461514 | orchestrator | 2025-10-09 10:50:43.461525 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-10-09 10:50:43.461536 | orchestrator | Thursday 09 October 2025 10:49:53 +0000 (0:00:00.070) 0:04:05.774 ****** 2025-10-09 10:50:43.461547 | orchestrator | 2025-10-09 10:50:43.461558 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-10-09 10:50:43.461569 | orchestrator | Thursday 09 October 2025 10:49:53 +0000 (0:00:00.069) 0:04:05.843 
****** 2025-10-09 10:50:43.461579 | orchestrator | 2025-10-09 10:50:43.461589 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-10-09 10:50:43.461598 | orchestrator | Thursday 09 October 2025 10:49:53 +0000 (0:00:00.070) 0:04:05.913 ****** 2025-10-09 10:50:43.461608 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461618 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.461627 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.461637 | orchestrator | 2025-10-09 10:50:43.461647 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-10-09 10:50:43.461656 | orchestrator | Thursday 09 October 2025 10:50:10 +0000 (0:00:16.976) 0:04:22.890 ****** 2025-10-09 10:50:43.461666 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461676 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.461686 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.461695 | orchestrator | 2025-10-09 10:50:43.461705 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-10-09 10:50:43.461714 | orchestrator | Thursday 09 October 2025 10:50:18 +0000 (0:00:07.699) 0:04:30.590 ****** 2025-10-09 10:50:43.461724 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461734 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.461744 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.461753 | orchestrator | 2025-10-09 10:50:43.461763 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-10-09 10:50:43.461773 | orchestrator | Thursday 09 October 2025 10:50:24 +0000 (0:00:05.902) 0:04:36.492 ****** 2025-10-09 10:50:43.461782 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461792 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.461802 | orchestrator | changed: [testbed-node-2] 
2025-10-09 10:50:43.461811 | orchestrator | 2025-10-09 10:50:43.461821 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-10-09 10:50:43.461831 | orchestrator | Thursday 09 October 2025 10:50:34 +0000 (0:00:10.742) 0:04:47.235 ****** 2025-10-09 10:50:43.461840 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:50:43.461850 | orchestrator | changed: [testbed-node-2] 2025-10-09 10:50:43.461865 | orchestrator | changed: [testbed-node-1] 2025-10-09 10:50:43.461875 | orchestrator | 2025-10-09 10:50:43.461884 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:50:43.461895 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-10-09 10:50:43.461905 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:50:43.461915 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-10-09 10:50:43.461925 | orchestrator | 2025-10-09 10:50:43.461934 | orchestrator | 2025-10-09 10:50:43.461944 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:50:43.461954 | orchestrator | Thursday 09 October 2025 10:50:40 +0000 (0:00:06.070) 0:04:53.305 ****** 2025-10-09 10:50:43.462012 | orchestrator | =============================================================================== 2025-10-09 10:50:43.462053 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.89s 2025-10-09 10:50:43.462065 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.27s 2025-10-09 10:50:43.462093 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.21s 2025-10-09 10:50:43.462103 | orchestrator | octavia : Restart octavia-api container 
-------------------------------- 16.98s 2025-10-09 10:50:43.462113 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.93s 2025-10-09 10:50:43.462123 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.74s 2025-10-09 10:50:43.462133 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.27s 2025-10-09 10:50:43.462142 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.71s 2025-10-09 10:50:43.462152 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.85s 2025-10-09 10:50:43.462162 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.70s 2025-10-09 10:50:43.462172 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.35s 2025-10-09 10:50:43.462182 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.85s 2025-10-09 10:50:43.462192 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.07s 2025-10-09 10:50:43.462202 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.90s 2025-10-09 10:50:43.462211 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.70s 2025-10-09 10:50:43.462221 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.65s 2025-10-09 10:50:43.462231 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.60s 2025-10-09 10:50:43.462241 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.52s 2025-10-09 10:50:43.462251 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.47s 2025-10-09 10:50:43.462266 | orchestrator | octavia : Create amphora flavor 
----------------------------------------- 5.46s 2025-10-09 10:50:43.462276 | orchestrator | 2025-10-09 10:50:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:50:46.496168 | orchestrator | 2025-10-09 10:50:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:50:49.538089 | orchestrator | 2025-10-09 10:50:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:50:52.583416 | orchestrator | 2025-10-09 10:50:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:50:55.628580 | orchestrator | 2025-10-09 10:50:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:50:58.679034 | orchestrator | 2025-10-09 10:50:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:01.718495 | orchestrator | 2025-10-09 10:51:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:04.768575 | orchestrator | 2025-10-09 10:51:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:07.818864 | orchestrator | 2025-10-09 10:51:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:10.859278 | orchestrator | 2025-10-09 10:51:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:13.891669 | orchestrator | 2025-10-09 10:51:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:16.931372 | orchestrator | 2025-10-09 10:51:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:19.974232 | orchestrator | 2025-10-09 10:51:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:23.028350 | orchestrator | 2025-10-09 10:51:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:26.068707 | orchestrator | 2025-10-09 10:51:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:29.108657 | orchestrator | 2025-10-09 10:51:29 | INFO  | Wait 1 second(s) until refresh of running tasks 
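The PLAY RECAP above is the usual gate for automated log scraping: a run is considered clean when every host line reports `unreachable=0` and `failed=0`. A small sketch of such a check (a hypothetical helper, not part of the testbed tooling):

```python
import re

# Matches Ansible PLAY RECAP host lines like:
#   testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 ...
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_ok(lines) -> bool:
    """Return True if at least one recap line was found and every host
    reports unreachable=0 and failed=0."""
    stats = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            stats[m.group("host")] = m.groupdict()
    return bool(stats) and all(
        s["unreachable"] == "0" and s["failed"] == "0" for s in stats.values()
    )
```

Applied to the recap above (ok=57/33/33, failed=0 on all three nodes), this would report a clean run.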
2025-10-09 10:51:32.156991 | orchestrator | 2025-10-09 10:51:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:35.193521 | orchestrator | 2025-10-09 10:51:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:38.246213 | orchestrator | 2025-10-09 10:51:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:41.278541 | orchestrator | 2025-10-09 10:51:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-10-09 10:51:44.324346 | orchestrator | 2025-10-09 10:51:44.686658 | orchestrator | 2025-10-09 10:51:44.693234 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Oct 9 10:51:44 UTC 2025 2025-10-09 10:51:44.693264 | orchestrator | 2025-10-09 10:51:45.086614 | orchestrator | ok: Runtime: 0:36:12.671977 2025-10-09 10:51:45.359180 | 2025-10-09 10:51:45.359312 | TASK [Bootstrap services] 2025-10-09 10:51:46.122400 | orchestrator | 2025-10-09 10:51:46.122584 | orchestrator | # BOOTSTRAP 2025-10-09 10:51:46.122605 | orchestrator | 2025-10-09 10:51:46.122619 | orchestrator | + set -e 2025-10-09 10:51:46.122632 | orchestrator | + echo 2025-10-09 10:51:46.122645 | orchestrator | + echo '# BOOTSTRAP' 2025-10-09 10:51:46.122662 | orchestrator | + echo 2025-10-09 10:51:46.122705 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-10-09 10:51:46.132372 | orchestrator | + set -e 2025-10-09 10:51:46.132401 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-10-09 10:51:51.651766 | orchestrator | 2025-10-09 10:51:51 | INFO  | It takes a moment until task 49d70f16-316b-4ee5-b223-e06f4dbd6a1e (flavor-manager) has been started and output is visible here. 
2025-10-09 10:51:59.889847 | orchestrator | 2025-10-09 10:51:54 | INFO  | Flavor SCS-1L-1 created
2025-10-09 10:51:59.889985 | orchestrator | 2025-10-09 10:51:55 | INFO  | Flavor SCS-1L-1-5 created
2025-10-09 10:51:59.890011 | orchestrator | 2025-10-09 10:51:55 | INFO  | Flavor SCS-1V-2 created
2025-10-09 10:51:59.890107 | orchestrator | 2025-10-09 10:51:55 | INFO  | Flavor SCS-1V-2-5 created
2025-10-09 10:51:59.890120 | orchestrator | 2025-10-09 10:51:55 | INFO  | Flavor SCS-1V-4 created
2025-10-09 10:51:59.890132 | orchestrator | 2025-10-09 10:51:55 | INFO  | Flavor SCS-1V-4-10 created
2025-10-09 10:51:59.890143 | orchestrator | 2025-10-09 10:51:56 | INFO  | Flavor SCS-1V-8 created
2025-10-09 10:51:59.890156 | orchestrator | 2025-10-09 10:51:56 | INFO  | Flavor SCS-1V-8-20 created
2025-10-09 10:51:59.890186 | orchestrator | 2025-10-09 10:51:56 | INFO  | Flavor SCS-2V-4 created
2025-10-09 10:51:59.890197 | orchestrator | 2025-10-09 10:51:56 | INFO  | Flavor SCS-2V-4-10 created
2025-10-09 10:51:59.890209 | orchestrator | 2025-10-09 10:51:56 | INFO  | Flavor SCS-2V-8 created
2025-10-09 10:51:59.890220 | orchestrator | 2025-10-09 10:51:56 | INFO  | Flavor SCS-2V-8-20 created
2025-10-09 10:51:59.890230 | orchestrator | 2025-10-09 10:51:56 | INFO  | Flavor SCS-2V-16 created
2025-10-09 10:51:59.890241 | orchestrator | 2025-10-09 10:51:57 | INFO  | Flavor SCS-2V-16-50 created
2025-10-09 10:51:59.890252 | orchestrator | 2025-10-09 10:51:57 | INFO  | Flavor SCS-4V-8 created
2025-10-09 10:51:59.890263 | orchestrator | 2025-10-09 10:51:57 | INFO  | Flavor SCS-4V-8-20 created
2025-10-09 10:51:59.890274 | orchestrator | 2025-10-09 10:51:57 | INFO  | Flavor SCS-4V-16 created
2025-10-09 10:51:59.890285 | orchestrator | 2025-10-09 10:51:57 | INFO  | Flavor SCS-4V-16-50 created
2025-10-09 10:51:59.890296 | orchestrator | 2025-10-09 10:51:58 | INFO  | Flavor SCS-4V-32 created
2025-10-09 10:51:59.890307 | orchestrator | 2025-10-09 10:51:58 | INFO  | Flavor SCS-4V-32-100 created
2025-10-09 10:51:59.890318 | orchestrator | 2025-10-09 10:51:58 | INFO  | Flavor SCS-8V-16 created
2025-10-09 10:51:59.890329 | orchestrator | 2025-10-09 10:51:58 | INFO  | Flavor SCS-8V-16-50 created
2025-10-09 10:51:59.890340 | orchestrator | 2025-10-09 10:51:58 | INFO  | Flavor SCS-8V-32 created
2025-10-09 10:51:59.890351 | orchestrator | 2025-10-09 10:51:58 | INFO  | Flavor SCS-8V-32-100 created
2025-10-09 10:51:59.890361 | orchestrator | 2025-10-09 10:51:58 | INFO  | Flavor SCS-16V-32 created
2025-10-09 10:51:59.890372 | orchestrator | 2025-10-09 10:51:59 | INFO  | Flavor SCS-16V-32-100 created
2025-10-09 10:51:59.890383 | orchestrator | 2025-10-09 10:51:59 | INFO  | Flavor SCS-2V-4-20s created
2025-10-09 10:51:59.890394 | orchestrator | 2025-10-09 10:51:59 | INFO  | Flavor SCS-4V-8-50s created
2025-10-09 10:51:59.890405 | orchestrator | 2025-10-09 10:51:59 | INFO  | Flavor SCS-8V-32-100s created
2025-10-09 10:52:02.264004 | orchestrator | 2025-10-09 10:52:02 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-10-09 10:52:12.432228 | orchestrator | 2025-10-09 10:52:12 | INFO  | Task 98e96fa8-2c3a-44cc-9158-aabedc3a9a9d (bootstrap-basic) was prepared for execution.
2025-10-09 10:52:12.432461 | orchestrator | 2025-10-09 10:52:12 | INFO  | It takes a moment until task 98e96fa8-2c3a-44cc-9158-aabedc3a9a9d (bootstrap-basic) has been started and output is visible here.
2025-10-09 10:53:17.322652 | orchestrator |
2025-10-09 10:53:17.322743 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-10-09 10:53:17.322759 | orchestrator |
2025-10-09 10:53:17.322771 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-10-09 10:53:17.322782 | orchestrator | Thursday 09 October 2025 10:52:17 +0000 (0:00:00.072) 0:00:00.072 ******
2025-10-09 10:53:17.322794 | orchestrator | ok: [localhost]
2025-10-09 10:53:17.322805 | orchestrator |
2025-10-09 10:53:17.322816 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-10-09 10:53:17.322827 | orchestrator | Thursday 09 October 2025 10:52:19 +0000 (0:00:02.171) 0:00:02.244 ******
2025-10-09 10:53:17.322837 | orchestrator | ok: [localhost]
2025-10-09 10:53:17.322848 | orchestrator |
2025-10-09 10:53:17.322859 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-10-09 10:53:17.322870 | orchestrator | Thursday 09 October 2025 10:52:29 +0000 (0:00:10.386) 0:00:12.631 ******
2025-10-09 10:53:17.322881 | orchestrator | changed: [localhost]
2025-10-09 10:53:17.322892 | orchestrator |
2025-10-09 10:53:17.322903 | orchestrator | TASK [Get volume type local] ***************************************************
2025-10-09 10:53:17.322914 | orchestrator | Thursday 09 October 2025 10:52:38 +0000 (0:00:08.202) 0:00:20.833 ******
2025-10-09 10:53:17.322925 | orchestrator | ok: [localhost]
2025-10-09 10:53:17.322936 | orchestrator |
2025-10-09 10:53:17.322947 | orchestrator | TASK [Create volume type local] ************************************************
2025-10-09 10:53:17.322957 | orchestrator | Thursday 09 October 2025 10:52:45 +0000 (0:00:07.316) 0:00:28.150 ******
2025-10-09 10:53:17.322972 | orchestrator | changed: [localhost]
2025-10-09 10:53:17.322984 | orchestrator |
2025-10-09 10:53:17.322995 | orchestrator | TASK [Create public network] ***************************************************
2025-10-09 10:53:17.323005 | orchestrator | Thursday 09 October 2025 10:52:52 +0000 (0:00:05.394) 0:00:35.567 ******
2025-10-09 10:53:17.323016 | orchestrator | changed: [localhost]
2025-10-09 10:53:17.323027 | orchestrator |
2025-10-09 10:53:17.323058 | orchestrator | TASK [Set public network to default] *******************************************
2025-10-09 10:53:17.323071 | orchestrator | Thursday 09 October 2025 10:52:58 +0000 (0:00:06.577) 0:00:40.961 ******
2025-10-09 10:53:17.323081 | orchestrator | changed: [localhost]
2025-10-09 10:53:17.323092 | orchestrator |
2025-10-09 10:53:17.323103 | orchestrator | TASK [Create public subnet] ****************************************************
2025-10-09 10:53:17.323122 | orchestrator | Thursday 09 October 2025 10:53:04 +0000 (0:00:04.783) 0:00:47.539 ******
2025-10-09 10:53:17.323133 | orchestrator | changed: [localhost]
2025-10-09 10:53:17.323144 | orchestrator |
2025-10-09 10:53:17.323155 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-10-09 10:53:17.323166 | orchestrator | Thursday 09 October 2025 10:53:09 +0000 (0:00:03.862) 0:00:52.322 ******
2025-10-09 10:53:17.323177 | orchestrator | changed: [localhost]
2025-10-09 10:53:17.323187 | orchestrator |
2025-10-09 10:53:17.323199 | orchestrator | TASK [Create manager role] *****************************************************
2025-10-09 10:53:17.323212 | orchestrator | Thursday 09 October 2025 10:53:13 +0000 (0:00:03.627) 0:00:56.184 ******
2025-10-09 10:53:17.323224 | orchestrator | ok: [localhost]
2025-10-09 10:53:17.323236 | orchestrator |
2025-10-09 10:53:17.323248 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:53:17.323261 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 10:53:17.323274 | orchestrator |
2025-10-09 10:53:17.323285 | orchestrator |
2025-10-09 10:53:17.323298 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:53:17.323332 | orchestrator | Thursday 09 October 2025 10:53:17 +0000 (0:00:03.627) 0:00:59.812 ******
2025-10-09 10:53:17.323344 | orchestrator | ===============================================================================
2025-10-09 10:53:17.323356 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.39s
2025-10-09 10:53:17.323368 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.20s
2025-10-09 10:53:17.323380 | orchestrator | Create volume type local ------------------------------------------------ 7.42s
2025-10-09 10:53:17.323392 | orchestrator | Get volume type local --------------------------------------------------- 7.32s
2025-10-09 10:53:17.323404 | orchestrator | Set public network to default ------------------------------------------- 6.58s
2025-10-09 10:53:17.323417 | orchestrator | Create public network --------------------------------------------------- 5.39s
2025-10-09 10:53:17.323428 | orchestrator | Create public subnet ---------------------------------------------------- 4.78s
2025-10-09 10:53:17.323440 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.86s
2025-10-09 10:53:17.323452 | orchestrator | Create manager role ----------------------------------------------------- 3.63s
2025-10-09 10:53:17.323464 | orchestrator | Gathering Facts --------------------------------------------------------- 2.17s
2025-10-09 10:53:19.921861 | orchestrator | 2025-10-09 10:53:19 | INFO  | It takes a moment until task a7b6fc31-1f31-4334-aa40-25ee034ea779 (image-manager) has been started and output is visible here.
2025-10-09 10:54:01.863329 | orchestrator | 2025-10-09 10:53:22 | INFO  | Processing image 'Cirros 0.6.2'
2025-10-09 10:54:01.863444 | orchestrator | 2025-10-09 10:53:23 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-10-09 10:54:01.863464 | orchestrator | 2025-10-09 10:53:23 | INFO  | Importing image Cirros 0.6.2
2025-10-09 10:54:01.863476 | orchestrator | 2025-10-09 10:53:23 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-10-09 10:54:01.863488 | orchestrator | 2025-10-09 10:53:25 | INFO  | Waiting for image to leave queued state...
2025-10-09 10:54:01.863500 | orchestrator | 2025-10-09 10:53:27 | INFO  | Waiting for import to complete...
2025-10-09 10:54:01.863511 | orchestrator | 2025-10-09 10:53:37 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-10-09 10:54:01.863522 | orchestrator | 2025-10-09 10:53:37 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-10-09 10:54:01.863533 | orchestrator | 2025-10-09 10:53:37 | INFO  | Setting internal_version = 0.6.2
2025-10-09 10:54:01.863544 | orchestrator | 2025-10-09 10:53:37 | INFO  | Setting image_original_user = cirros
2025-10-09 10:54:01.863555 | orchestrator | 2025-10-09 10:53:37 | INFO  | Adding tag os:cirros
2025-10-09 10:54:01.863567 | orchestrator | 2025-10-09 10:53:37 | INFO  | Setting property architecture: x86_64
2025-10-09 10:54:01.863578 | orchestrator | 2025-10-09 10:53:38 | INFO  | Setting property hw_disk_bus: scsi
2025-10-09 10:54:01.863588 | orchestrator | 2025-10-09 10:53:38 | INFO  | Setting property hw_rng_model: virtio
2025-10-09 10:54:01.863599 | orchestrator | 2025-10-09 10:53:38 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-10-09 10:54:01.863610 | orchestrator | 2025-10-09 10:53:38 | INFO  | Setting property hw_watchdog_action: reset
2025-10-09 10:54:01.863620 | orchestrator | 2025-10-09 10:53:39 | INFO  | Setting property hypervisor_type: qemu
2025-10-09 10:54:01.863631 | orchestrator | 2025-10-09 10:53:39 | INFO  | Setting property os_distro: cirros
2025-10-09 10:54:01.863642 | orchestrator | 2025-10-09 10:53:39 | INFO  | Setting property os_purpose: minimal
2025-10-09 10:54:01.863652 | orchestrator | 2025-10-09 10:53:39 | INFO  | Setting property replace_frequency: never
2025-10-09 10:54:01.863685 | orchestrator | 2025-10-09 10:53:39 | INFO  | Setting property uuid_validity: none
2025-10-09 10:54:01.863697 | orchestrator | 2025-10-09 10:53:40 | INFO  | Setting property provided_until: none
2025-10-09 10:54:01.863717 | orchestrator | 2025-10-09 10:53:40 | INFO  | Setting property image_description: Cirros
2025-10-09 10:54:01.863733 | orchestrator | 2025-10-09 10:53:40 | INFO  | Setting property image_name: Cirros
2025-10-09 10:54:01.863744 | orchestrator | 2025-10-09 10:53:40 | INFO  | Setting property internal_version: 0.6.2
2025-10-09 10:54:01.863754 | orchestrator | 2025-10-09 10:53:40 | INFO  | Setting property image_original_user: cirros
2025-10-09 10:54:01.863765 | orchestrator | 2025-10-09 10:53:41 | INFO  | Setting property os_version: 0.6.2
2025-10-09 10:54:01.863776 | orchestrator | 2025-10-09 10:53:41 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-10-09 10:54:01.863789 | orchestrator | 2025-10-09 10:53:41 | INFO  | Setting property image_build_date: 2023-05-30
2025-10-09 10:54:01.863799 | orchestrator | 2025-10-09 10:53:42 | INFO  | Checking status of 'Cirros 0.6.2'
2025-10-09 10:54:01.863810 | orchestrator | 2025-10-09 10:53:42 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-10-09 10:54:01.863821 | orchestrator | 2025-10-09 10:53:42 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-10-09 10:54:01.863831 | orchestrator | 2025-10-09 10:53:42 | INFO  | Processing image 'Cirros 0.6.3'
2025-10-09 10:54:01.863842 | orchestrator | 2025-10-09 10:53:42 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-10-09 10:54:01.863853 | orchestrator | 2025-10-09 10:53:42 | INFO  | Importing image Cirros 0.6.3
2025-10-09 10:54:01.863864 | orchestrator | 2025-10-09 10:53:42 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-10-09 10:54:01.863875 | orchestrator | 2025-10-09 10:53:43 | INFO  | Waiting for image to leave queued state...
2025-10-09 10:54:01.863885 | orchestrator | 2025-10-09 10:53:45 | INFO  | Waiting for import to complete...
2025-10-09 10:54:01.863913 | orchestrator | 2025-10-09 10:53:56 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-10-09 10:54:01.863925 | orchestrator | 2025-10-09 10:53:56 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-10-09 10:54:01.863936 | orchestrator | 2025-10-09 10:53:56 | INFO  | Setting internal_version = 0.6.3
2025-10-09 10:54:01.863946 | orchestrator | 2025-10-09 10:53:56 | INFO  | Setting image_original_user = cirros
2025-10-09 10:54:01.863957 | orchestrator | 2025-10-09 10:53:56 | INFO  | Adding tag os:cirros
2025-10-09 10:54:01.863968 | orchestrator | 2025-10-09 10:53:56 | INFO  | Setting property architecture: x86_64
2025-10-09 10:54:01.863979 | orchestrator | 2025-10-09 10:53:57 | INFO  | Setting property hw_disk_bus: scsi
2025-10-09 10:54:01.863990 | orchestrator | 2025-10-09 10:53:57 | INFO  | Setting property hw_rng_model: virtio
2025-10-09 10:54:01.864001 | orchestrator | 2025-10-09 10:53:57 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-10-09 10:54:01.864011 | orchestrator | 2025-10-09 10:53:57 | INFO  | Setting property hw_watchdog_action: reset
2025-10-09 10:54:01.864022 | orchestrator | 2025-10-09 10:53:58 | INFO  | Setting property hypervisor_type: qemu
2025-10-09 10:54:01.864059 | orchestrator | 2025-10-09 10:53:58 | INFO  | Setting property os_distro: cirros
2025-10-09 10:54:01.864078 | orchestrator | 2025-10-09 10:53:58 | INFO  | Setting property os_purpose: minimal
2025-10-09 10:54:01.864089 | orchestrator | 2025-10-09 10:53:58 | INFO  | Setting property replace_frequency: never
2025-10-09 10:54:01.864100 | orchestrator | 2025-10-09 10:53:58 | INFO  | Setting property uuid_validity: none
2025-10-09 10:54:01.864111 | orchestrator | 2025-10-09 10:53:59 | INFO  | Setting property provided_until: none
2025-10-09 10:54:01.864121 | orchestrator | 2025-10-09 10:53:59 | INFO  | Setting property image_description: Cirros
2025-10-09 10:54:01.864132 | orchestrator | 2025-10-09 10:53:59 | INFO  | Setting property image_name: Cirros
2025-10-09 10:54:01.864142 | orchestrator | 2025-10-09 10:53:59 | INFO  | Setting property internal_version: 0.6.3
2025-10-09 10:54:01.864153 | orchestrator | 2025-10-09 10:53:59 | INFO  | Setting property image_original_user: cirros
2025-10-09 10:54:01.864164 | orchestrator | 2025-10-09 10:54:00 | INFO  | Setting property os_version: 0.6.3
2025-10-09 10:54:01.864175 | orchestrator | 2025-10-09 10:54:00 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-10-09 10:54:01.864186 | orchestrator | 2025-10-09 10:54:00 | INFO  | Setting property image_build_date: 2024-09-26
2025-10-09 10:54:01.864201 | orchestrator | 2025-10-09 10:54:00 | INFO  | Checking status of 'Cirros 0.6.3'
2025-10-09 10:54:01.864212 | orchestrator | 2025-10-09 10:54:00 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-10-09 10:54:01.864223 | orchestrator | 2025-10-09 10:54:00 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-10-09 10:54:02.232118 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-10-09 10:54:04.504374 | orchestrator | 2025-10-09 10:54:04 | INFO  | date: 2025-10-09
2025-10-09 10:54:04.504476 | orchestrator | 2025-10-09 10:54:04 | INFO  | image: octavia-amphora-haproxy-2024.2.20251009.qcow2
2025-10-09 10:54:04.504495 | orchestrator | 2025-10-09 10:54:04 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2
2025-10-09 10:54:04.504530 | orchestrator | 2025-10-09 10:54:04 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2.CHECKSUM
2025-10-09 10:54:04.539486 | orchestrator | 2025-10-09 10:54:04 | INFO  | checksum: a6fe8b4f836532cd1ebf8aa04ddce92c8b8a74168572318bf2952019682a3f85
2025-10-09 10:54:04.640745 | orchestrator | 2025-10-09 10:54:04 | INFO  | It takes a moment until task ace359b3-af0a-47a4-9f89-6b22bace12c4 (image-manager) has been started and output is visible here.
2025-10-09 10:55:06.822297 | orchestrator | 2025-10-09 10:54:06 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-10-09'
2025-10-09 10:55:06.822390 | orchestrator | 2025-10-09 10:54:06 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2: 200
2025-10-09 10:55:06.822410 | orchestrator | 2025-10-09 10:54:06 | INFO  | Importing image OpenStack Octavia Amphora 2025-10-09
2025-10-09 10:55:06.822422 | orchestrator | 2025-10-09 10:54:06 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2
2025-10-09 10:55:06.822434 | orchestrator | 2025-10-09 10:54:08 | INFO  | Waiting for image to leave queued state...
2025-10-09 10:55:06.822445 | orchestrator | 2025-10-09 10:54:10 | INFO  | Waiting for import to complete...
2025-10-09 10:55:06.822476 | orchestrator | 2025-10-09 10:54:20 | INFO  | Waiting for import to complete...
2025-10-09 10:55:06.822487 | orchestrator | 2025-10-09 10:54:30 | INFO  | Waiting for import to complete...
2025-10-09 10:55:06.822498 | orchestrator | 2025-10-09 10:54:40 | INFO  | Waiting for import to complete...
2025-10-09 10:55:06.822509 | orchestrator | 2025-10-09 10:54:50 | INFO  | Waiting for import to complete...
2025-10-09 10:55:06.822520 | orchestrator | 2025-10-09 10:55:01 | INFO  | Import of 'OpenStack Octavia Amphora 2025-10-09' successfully completed, reloading images
2025-10-09 10:55:06.822531 | orchestrator | 2025-10-09 10:55:01 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-10-09'
2025-10-09 10:55:06.822542 | orchestrator | 2025-10-09 10:55:01 | INFO  | Setting internal_version = 2025-10-09
2025-10-09 10:55:06.822553 | orchestrator | 2025-10-09 10:55:01 | INFO  | Setting image_original_user = ubuntu
2025-10-09 10:55:06.822564 | orchestrator | 2025-10-09 10:55:01 | INFO  | Adding tag amphora
2025-10-09 10:55:06.822575 | orchestrator | 2025-10-09 10:55:01 | INFO  | Adding tag os:ubuntu
2025-10-09 10:55:06.822585 | orchestrator | 2025-10-09 10:55:02 | INFO  | Setting property architecture: x86_64
2025-10-09 10:55:06.822596 | orchestrator | 2025-10-09 10:55:02 | INFO  | Setting property hw_disk_bus: scsi
2025-10-09 10:55:06.822606 | orchestrator | 2025-10-09 10:55:02 | INFO  | Setting property hw_rng_model: virtio
2025-10-09 10:55:06.822617 | orchestrator | 2025-10-09 10:55:02 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-10-09 10:55:06.822641 | orchestrator | 2025-10-09 10:55:03 | INFO  | Setting property hw_watchdog_action: reset
2025-10-09 10:55:06.822652 | orchestrator | 2025-10-09 10:55:03 | INFO  | Setting property hypervisor_type: qemu
2025-10-09 10:55:06.822663 | orchestrator | 2025-10-09 10:55:03 | INFO  | Setting property os_distro: ubuntu
2025-10-09 10:55:06.822674 | orchestrator | 2025-10-09 10:55:03 | INFO  | Setting property replace_frequency: quarterly
2025-10-09 10:55:06.822684 | orchestrator | 2025-10-09 10:55:03 | INFO  | Setting property uuid_validity: last-1
2025-10-09 10:55:06.822695 | orchestrator | 2025-10-09 10:55:04 | INFO  | Setting property provided_until: none
2025-10-09 10:55:06.822706 | orchestrator | 2025-10-09 10:55:04 | INFO  | Setting property os_purpose: network
2025-10-09 10:55:06.822716 | orchestrator | 2025-10-09 10:55:04 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-10-09 10:55:06.822727 | orchestrator | 2025-10-09 10:55:04 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-10-09 10:55:06.822738 | orchestrator | 2025-10-09 10:55:05 | INFO  | Setting property internal_version: 2025-10-09
2025-10-09 10:55:06.822749 | orchestrator | 2025-10-09 10:55:05 | INFO  | Setting property image_original_user: ubuntu
2025-10-09 10:55:06.822760 | orchestrator | 2025-10-09 10:55:05 | INFO  | Setting property os_version: 2025-10-09
2025-10-09 10:55:06.822771 | orchestrator | 2025-10-09 10:55:05 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251009.qcow2
2025-10-09 10:55:06.822782 | orchestrator | 2025-10-09 10:55:06 | INFO  | Setting property image_build_date: 2025-10-09
2025-10-09 10:55:06.822793 | orchestrator | 2025-10-09 10:55:06 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-10-09'
2025-10-09 10:55:06.822804 | orchestrator | 2025-10-09 10:55:06 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-10-09'
2025-10-09 10:55:06.822838 | orchestrator | 2025-10-09 10:55:06 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-10-09 10:55:06.822851 | orchestrator | 2025-10-09 10:55:06 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-10-09 10:55:06.822864 | orchestrator | 2025-10-09 10:55:06 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-10-09 10:55:06.822877 | orchestrator | 2025-10-09 10:55:06 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-10-09 10:55:07.517562 | orchestrator | ok: Runtime: 0:03:21.470950
2025-10-09 10:55:07.541326 |
2025-10-09 10:55:07.541472 | TASK [Run checks]
2025-10-09 10:55:08.191474 | orchestrator | + set -e
2025-10-09 10:55:08.191644 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-10-09 10:55:08.191668 | orchestrator | ++ export INTERACTIVE=false
2025-10-09 10:55:08.191690 | orchestrator | ++ INTERACTIVE=false
2025-10-09 10:55:08.191704 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-10-09 10:55:08.191717 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-10-09 10:55:08.192325 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-10-09 10:55:08.193097 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-10-09 10:55:08.196593 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-09 10:55:08.196660 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-09 10:55:08.196681 | orchestrator |
2025-10-09 10:55:08.196691 | orchestrator | # CHECK
2025-10-09 10:55:08.196700 | orchestrator |
2025-10-09 10:55:08.196708 | orchestrator | + echo
2025-10-09 10:55:08.196724 | orchestrator | + echo '# CHECK'
2025-10-09 10:55:08.196733 | orchestrator | + echo
2025-10-09 10:55:08.196744 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-10-09 10:55:08.197223 | orchestrator | ++ semver latest 5.0.0
2025-10-09 10:55:08.262154 | orchestrator |
2025-10-09 10:55:08.262227 | orchestrator | ## Containers @ testbed-manager
2025-10-09 10:55:08.262239 | orchestrator |
2025-10-09 10:55:08.262249 | orchestrator | + [[ -1 -eq -1 ]]
2025-10-09 10:55:08.262257 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-10-09 10:55:08.262266 | orchestrator | + echo
2025-10-09 10:55:08.262274 | orchestrator | + echo '## Containers @ testbed-manager'
2025-10-09 10:55:08.262283 | orchestrator | + echo
2025-10-09 10:55:08.262291 | orchestrator | + osism container testbed-manager ps
2025-10-09 10:55:10.681858 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-10-09 10:55:10.681960 | orchestrator | c82b94bf5f50 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes prometheus_blackbox_exporter
2025-10-09 10:55:10.682078 | orchestrator | e7a49286fd86 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_alertmanager
2025-10-09 10:55:10.682098 | orchestrator | ae5b0ccd01f0 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-10-09 10:55:10.682117 | orchestrator | 75cd3b5d1f10 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-10-09 10:55:10.682129 | orchestrator | cbad591e218c registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_server
2025-10-09 10:55:10.682144 | orchestrator | 38a78057fd94 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient
2025-10-09 10:55:10.682156 | orchestrator | 9a925a1f7149 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-10-09 10:55:10.682168 | orchestrator | b2aa4f9a0dc0 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-10-09 10:55:10.682180 | orchestrator | 21ed7ae742c7 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-10-09 10:55:10.682215 | orchestrator | 2a31a1eb97a1 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin
2025-10-09 10:55:10.682227 | orchestrator | b3a0e1cfb559 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 33 minutes openstackclient
2025-10-09 10:55:10.682239 | orchestrator | c5c22f5c751c registry.osism.tech/osism/homer:v25.08.1 "/bin/sh /entrypoint…" 33 minutes ago Up 33 minutes (healthy) 8080/tcp homer
2025-10-09 10:55:10.682251 | orchestrator | fdcdba21c06f registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 57 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-10-09 10:55:10.682262 | orchestrator | 59200ce13557 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 40 minutes (healthy) manager-inventory_reconciler-1
2025-10-09 10:55:10.682274 | orchestrator | b91e440780ce registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible
2025-10-09 10:55:10.682304 | orchestrator | 02d551261eb2 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible
2025-10-09 10:55:10.682321 | orchestrator | 1926466bdf20 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-kubernetes
2025-10-09 10:55:10.682333 | orchestrator | ebb57ae6e75d registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible
2025-10-09 10:55:10.682344 | orchestrator | 60d8a06a2f01 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 41 minutes (healthy) 8000/tcp manager-ara-server-1
2025-10-09 10:55:10.682356 | orchestrator | 3f14a7c4041b registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-10-09 10:55:10.682367 | orchestrator | 6fbc2850bc17 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-listener-1
2025-10-09 10:55:10.682378 | orchestrator | 4bad98d35055 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 3306/tcp manager-mariadb-1
2025-10-09 10:55:10.682390 | orchestrator | e8ce384555f7 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-openstack-1
2025-10-09 10:55:10.682408 | orchestrator | 878063fa7cad registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-beat-1
2025-10-09 10:55:10.682419 | orchestrator | 2b3336715490 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 41 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-10-09 10:55:10.682431 | orchestrator | 36aea5c9dee9 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 41 minutes (healthy) osismclient
2025-10-09 10:55:10.682442 | orchestrator | eca1c6a7baf4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-flower-1
2025-10-09 10:55:10.682453 | orchestrator | d0f0e64e4cf2 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 6379/tcp manager-redis-1
2025-10-09 10:55:10.682465 | orchestrator | 4d3a1729b109 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-10-09 10:55:11.019694 | orchestrator |
2025-10-09 10:55:11.019753 | orchestrator | ## Images @ testbed-manager
2025-10-09 10:55:11.019762 | orchestrator |
2025-10-09 10:55:11.019768 | orchestrator | + echo
2025-10-09 10:55:11.019774 | orchestrator | + echo '## Images @ testbed-manager'
2025-10-09 10:55:11.019780 | orchestrator | + echo
2025-10-09 10:55:11.019785 | orchestrator | + osism container testbed-manager images
2025-10-09 10:55:13.406892 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-10-09 10:55:13.406967 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 7a8689ec37ba 8 hours ago 236MB
2025-10-09 10:55:13.406977 | orchestrator | registry.osism.tech/osism/cephclient reef 977ad5a29f81 8 hours ago 453MB
2025-10-09 10:55:13.406997 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 43b16b28e14b 9 hours ago 676MB
2025-10-09 10:55:13.407005 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7859e6535a7c 9 hours ago 273MB
2025-10-09 10:55:13.407013 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cef0cd0f728f 9 hours ago 586MB
2025-10-09 10:55:13.407039 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e0d8690f2704 9 hours ago 313MB
2025-10-09 10:55:13.407046 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 91d48b5c0f84 9 hours ago 316MB
2025-10-09 10:55:13.407054 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 2bde128d371f 9 hours ago 411MB
2025-10-09 10:55:13.407061 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 c4fe2c2ce2db 9 hours ago 847MB
2025-10-09 10:55:13.407068 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b95198db2d64 9 hours ago 365MB
2025-10-09 10:55:13.407076 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 677718522839 11 hours ago 592MB
2025-10-09 10:55:13.407083 | orchestrator | registry.osism.tech/osism/osism-ansible latest ae64f913c91d 11 hours ago 596MB
2025-10-09 10:55:13.407091 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 8d6c4e363f3e 11 hours ago 545MB
2025-10-09 10:55:13.407098 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 06ad203baebe 11 hours ago 1.23GB
2025-10-09 10:55:13.407118 | orchestrator | registry.osism.tech/osism/osism latest e6deaa7d91be 11 hours ago 327MB
2025-10-09 10:55:13.407126 | orchestrator | registry.osism.tech/osism/osism-frontend latest 2791533a942b 11 hours ago 238MB
2025-10-09 10:55:13.407133 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 0707e6138df9 11 hours ago 322MB
2025-10-09 10:55:13.407140 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 18 hours ago 742MB
2025-10-09 10:55:13.407148 | orchestrator | registry.osism.tech/osism/homer v25.08.1 849a6c620511 12 days ago 11.5MB
2025-10-09 10:55:13.407155 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 weeks ago 275MB
2025-10-09 10:55:13.407162 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 885f31622e75 2 months ago 336MB
2025-10-09 10:55:13.407169 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 2 months ago 226MB
2025-10-09 10:55:13.407177 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 3 months ago 41.4MB
2025-10-09 10:55:13.407184 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 16 months ago 146MB
2025-10-09 10:55:13.729424 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-10-09 10:55:13.729809 | orchestrator | ++ semver latest 5.0.0
2025-10-09 10:55:13.791452 | orchestrator |
2025-10-09 10:55:13.791486 | orchestrator | ## Containers @ testbed-node-0
2025-10-09 10:55:13.791498 | orchestrator |
2025-10-09 10:55:13.791510 | orchestrator | + [[ -1 -eq -1 ]]
2025-10-09 10:55:13.791521 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-10-09 10:55:13.791533 | orchestrator | + echo
2025-10-09 10:55:13.791544 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-10-09 10:55:13.791555 | orchestrator | + echo
2025-10-09 10:55:13.791566 | orchestrator | + osism container
testbed-node-0 ps 2025-10-09 10:55:16.324688 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-10-09 10:55:16.324818 | orchestrator | 09acbd6a4669 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-10-09 10:55:16.324834 | orchestrator | 8a6c535d2e2c registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-10-09 10:55:16.324846 | orchestrator | 5f0cacf68ebe registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-10-09 10:55:16.324857 | orchestrator | 1595fba5117a registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-10-09 10:55:16.324868 | orchestrator | 9278cbe9ea68 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-10-09 10:55:16.324896 | orchestrator | fb1d87b51107 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-10-09 10:55:16.324907 | orchestrator | 79d7cb849f2b registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-10-09 10:55:16.324919 | orchestrator | c531762a7921 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-10-09 10:55:16.324930 | orchestrator | 9c0f7fcffabc registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-10-09 10:55:16.324958 | orchestrator | 96496ca5b0db registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-10-09 10:55:16.324970 | orchestrator | 28d33c6c2b27 
registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-10-09 10:55:16.324981 | orchestrator | 321c8590341f registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_conductor 2025-10-09 10:55:16.324992 | orchestrator | 16a06c59f44b registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-10-09 10:55:16.325003 | orchestrator | c2a1c2eb9e09 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-10-09 10:55:16.325034 | orchestrator | 4e772718a93c registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-10-09 10:55:16.325046 | orchestrator | b8b3c7628219 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-10-09 10:55:16.325057 | orchestrator | cbe0ca134e2b registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-10-09 10:55:16.325068 | orchestrator | 1bd2af07b267 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2025-10-09 10:55:16.325079 | orchestrator | db666f2e9537 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-10-09 10:55:16.325090 | orchestrator | 5534810d25b7 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-10-09 10:55:16.325101 | orchestrator | 59b94d398724 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 
2025-10-09 10:55:16.325127 | orchestrator | 99c8524d32e6 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api
2025-10-09 10:55:16.325139 | orchestrator | c70d57678e4d registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-10-09 10:55:16.325150 | orchestrator | b7baddaafcab registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter
2025-10-09 10:55:16.325166 | orchestrator | aa25c3fd8451 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api
2025-10-09 10:55:16.325178 | orchestrator | 0860579b8e73 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-10-09 10:55:16.325192 | orchestrator | 7313f116b44c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler
2025-10-09 10:55:16.325204 | orchestrator | dcea0eddabaf registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-10-09 10:55:16.325215 | orchestrator | a07f260ac723 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-10-09 10:55:16.325232 | orchestrator | 7d9344d5d7b3 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api
2025-10-09 10:55:16.325244 | orchestrator | c64e236b08d7 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2025-10-09 10:55:16.325255 | orchestrator | c8bee0baa794 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0
2025-10-09 10:55:16.325266 | orchestrator | c7012549f7db registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-10-09 10:55:16.325277 | orchestrator | a0b793cba3c9 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-10-09 10:55:16.325288 | orchestrator | 18f6e9da90b6 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-10-09 10:55:16.325299 | orchestrator | 34165ff939cf registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon
2025-10-09 10:55:16.325310 | orchestrator | b2efb8ff15c0 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-10-09 10:55:16.325321 | orchestrator | b3b5188df2cf registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-10-09 10:55:16.325332 | orchestrator | 8f5ea39c2ea6 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-10-09 10:55:16.325343 | orchestrator | d54f3b6545e7 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0
2025-10-09 10:55:16.325354 | orchestrator | 4bedc2e86989 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-10-09 10:55:16.325365 | orchestrator | 43c12508bb03 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-10-09 10:55:16.325376 | orchestrator | 32801c4d2ef3 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy
2025-10-09 10:55:16.325387 | orchestrator | ab4103de7749 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd
2025-10-09 10:55:16.325412 | orchestrator | 5ce875730726 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db
2025-10-09 10:55:16.325424 | orchestrator | 5c6b6a1fb3ef registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db
2025-10-09 10:55:16.325435 | orchestrator | ee731b6bd0ca registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0
2025-10-09 10:55:16.325446 | orchestrator | 04b466644646 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller
2025-10-09 10:55:16.325463 | orchestrator | 4777b98779a3 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq
2025-10-09 10:55:16.325478 | orchestrator | 9a948fb8cda0 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2025-10-09 10:55:16.325489 | orchestrator | 03d0edc51f3d registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2025-10-09 10:55:16.325500 | orchestrator | 172a13a74a96 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel
2025-10-09 10:55:16.325511 | orchestrator | bc7cc60f5659 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis
2025-10-09 10:55:16.325522 | orchestrator | 8346ef1f9a86 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached
2025-10-09 10:55:16.325533 | orchestrator | ac2bee7cd300 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-10-09 10:55:16.325544 | orchestrator | 0bebc6f8335a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox
2025-10-09 10:55:16.325555 | orchestrator | 966528ec9533 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-10-09 10:55:16.675471 | orchestrator |
2025-10-09 10:55:16.675540 | orchestrator | ## Images @ testbed-node-0
2025-10-09 10:55:16.675553 | orchestrator |
2025-10-09 10:55:16.675564 | orchestrator | + echo
2025-10-09 10:55:16.675576 | orchestrator | + echo '## Images @ testbed-node-0'
2025-10-09 10:55:16.675588 | orchestrator | + echo
2025-10-09 10:55:16.675599 | orchestrator | + osism container testbed-node-0 images
2025-10-09 10:55:19.191130 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-10-09 10:55:19.191270 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 210330bc6243 8 hours ago 1.27GB
2025-10-09 10:55:19.191295 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 9ad8b3dbdeaf 9 hours ago 1.01GB
2025-10-09 10:55:19.191307 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 f0f4570cb7df 9 hours ago 373MB
2025-10-09 10:55:19.191318 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 825f68120f76 9 hours ago 330MB
2025-10-09 10:55:19.191329 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 5763f1c53827 9 hours ago 274MB
2025-10-09 10:55:19.191341 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 bd79e8fe81c2 9 hours ago 1.52GB
2025-10-09 10:55:19.191352 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 8a60a267233e 9 hours ago 1.54GB
2025-10-09 10:55:19.191363 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 897775bd4fa9 9 hours ago 284MB
2025-10-09 10:55:19.191374 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 43b16b28e14b 9 hours ago 676MB
2025-10-09 10:55:19.191385 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 bbbe7ef89f66 9 hours ago 282MB
2025-10-09 10:55:19.191418 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7859e6535a7c 9 hours ago 273MB
2025-10-09 10:55:19.191430 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cef0cd0f728f 9 hours ago 586MB
2025-10-09 10:55:19.191441 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 1b3e8c0c1a21 9 hours ago 455MB
2025-10-09 10:55:19.191452 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 24aa4d4d2fe7 9 hours ago 308MB
2025-10-09 10:55:19.191488 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e0d8690f2704 9 hours ago 313MB
2025-10-09 10:55:19.191500 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 106e41046118 9 hours ago 299MB
2025-10-09 10:55:19.191511 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 aef569508034 9 hours ago 306MB
2025-10-09 10:55:19.191522 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b95198db2d64 9 hours ago 365MB
2025-10-09 10:55:19.191533 | orchestrator | registry.osism.tech/kolla/redis 2024.2 36d3f82e78f4 9 hours ago 280MB
2025-10-09 10:55:19.191544 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a9570a47c3dd 9 hours ago 280MB
2025-10-09 10:55:19.191554 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 cf77f19c55da 9 hours ago 289MB
2025-10-09 10:55:19.191565 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 d1cc862a21f7 9 hours ago 289MB
2025-10-09 10:55:19.191576 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 98cc9500ccdb 9 hours ago 1.15GB
2025-10-09 10:55:19.191587 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 3960064f2b5f 9 hours ago 297MB
2025-10-09 10:55:19.191598 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 96d4a0ac10d9 9 hours ago 297MB
2025-10-09 10:55:19.191611 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 355c058acc45 9 hours ago 297MB
2025-10-09 10:55:19.191624 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 86f05ce447ed 9 hours ago 297MB
2025-10-09 10:55:19.191636 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 a4a1f788882b 9 hours ago 1.06GB
2025-10-09 10:55:19.191648 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 881785b9a4bd 9 hours ago 1.06GB
2025-10-09 10:55:19.191660 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 92a7d76d345a 9 hours ago 1.04GB
2025-10-09 10:55:19.191673 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 a2fad00a75df 9 hours ago 1.04GB
2025-10-09 10:55:19.191685 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 4408b6d3c9a4 9 hours ago 1.04GB
2025-10-09 10:55:19.191697 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b464a782f51b 9 hours ago 1.21GB
2025-10-09 10:55:19.191710 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 af9b579d305f 9 hours ago 1.37GB
2025-10-09 10:55:19.191722 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 72b25486e20f 9 hours ago 1.21GB
2025-10-09 10:55:19.191734 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 bf4c81d8ba64 9 hours ago 1.21GB
2025-10-09 10:55:19.191747 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 5c34537e1d20 9 hours ago 984MB
2025-10-09 10:55:19.191780 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 d03d33830042 9 hours ago 985MB
2025-10-09 10:55:19.191794 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3a96a9a77bc6 9 hours ago 1.17GB
2025-10-09 10:55:19.191807 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 aa25d2883d52 9 hours ago 984MB
2025-10-09 10:55:19.191819 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 7d6cc28b5b2a 9 hours ago 1.11GB
2025-10-09 10:55:19.191832 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 628d0794a005 9 hours ago 998MB
2025-10-09 10:55:19.191844 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 8f915d64b487 9 hours ago 999MB
2025-10-09 10:55:19.191856 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 9f6d442e498f 9 hours ago 999MB
2025-10-09 10:55:19.191878 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 4c61b2962fd2 9 hours ago 982MB
2025-10-09 10:55:19.191890 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 0258dc186a7e 9 hours ago 981MB
2025-10-09 10:55:19.191903 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 49e1168c2b08 9 hours ago 982MB
2025-10-09 10:55:19.191915 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 749a5ce1fe11 9 hours ago 982MB
2025-10-09 10:55:19.191927 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0e199e24f15d 9 hours ago 1.41GB
2025-10-09 10:55:19.191940 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 50f5ca66f125 9 hours ago 1.41GB
2025-10-09 10:55:19.191952 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 919dea3a2ebf 9 hours ago 1.05GB
2025-10-09 10:55:19.191965 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 297b8c278bd6 9 hours ago 1.09GB
2025-10-09 10:55:19.191975 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 fac16b67eba0 9 hours ago 1.05GB
2025-10-09 10:55:19.191986 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 2f267cd16f3d 9 hours ago 997MB
2025-10-09 10:55:19.191997 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f2b899e0ad5e 9 hours ago 993MB
2025-10-09 10:55:19.192008 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 172e7a985d3e 9 hours ago 992MB
2025-10-09 10:55:19.192047 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 b6c138c5df69 9 hours ago 993MB
2025-10-09 10:55:19.192067 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 5b1031e006cf 9 hours ago 993MB
2025-10-09 10:55:19.192085 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 678229f2a04f 9 hours ago 997MB
2025-10-09 10:55:19.192103 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 06d7ec60b591 9 hours ago 1.06GB
2025-10-09 10:55:19.192121 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 3b6410bcdf26 9 hours ago 998MB
2025-10-09 10:55:19.192140 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 9de753a2ad43 9 hours ago 1.25GB
2025-10-09 10:55:19.192159 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 79b811cd316d 9 hours ago 1.14GB
2025-10-09 10:55:19.542530 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-10-09 10:55:19.543469 | orchestrator | ++ semver latest 5.0.0
2025-10-09 10:55:19.609425 | orchestrator |
2025-10-09 10:55:19.609478 | orchestrator | ## Containers @ testbed-node-1
2025-10-09 10:55:19.609491 | orchestrator |
2025-10-09 10:55:19.609502 | orchestrator | + [[ -1 -eq -1 ]]
2025-10-09 10:55:19.609514 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-10-09 10:55:19.609526 | orchestrator | + echo
2025-10-09 10:55:19.609537 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-10-09 10:55:19.609550 | orchestrator | + echo
2025-10-09 10:55:19.609561 | orchestrator | + osism container testbed-node-1 ps
2025-10-09 10:55:22.246667 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-10-09 10:55:22.246766 | orchestrator | 968c8b9e79a9 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-10-09 10:55:22.246801 | orchestrator | 8ab7d34f5530 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-10-09 10:55:22.246813 | orchestrator | 28dd81b3af44 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-10-09 10:55:22.246848 | orchestrator | 01faa1367e55 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-10-09 10:55:22.246859 | orchestrator | e6965af1be93 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-10-09 10:55:22.246869 | orchestrator | a5a31dec973b registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-10-09 10:55:22.246879 | orchestrator | 0c014f00a30d registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-10-09 10:55:22.246888 | orchestrator | cc2a4e2d8afb registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-10-09 10:55:22.246898 | orchestrator | 859e9610589a registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-10-09 10:55:22.246908 | orchestrator | dd124fa2b44c registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-10-09 10:55:22.246918 | orchestrator | 1229352779c8 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2025-10-09 10:55:22.246927 | orchestrator | e74a5a49ea36 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-10-09 10:55:22.246937 | orchestrator | f27fd024d683 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_conductor
2025-10-09 10:55:22.246951 | orchestrator | 665da1e2e819 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns
2025-10-09 10:55:22.246962 | orchestrator | 7510010bb421 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2025-10-09 10:55:22.246971 | orchestrator | ba786605ecf4 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2025-10-09 10:55:22.246981 | orchestrator | 7463a5193240 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2025-10-09 10:55:22.246991 | orchestrator | 47d8ba18b36a registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2025-10-09 10:55:22.247002 | orchestrator | b371d135acb1 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker
2025-10-09 10:55:22.247012 | orchestrator | 41920a714417 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener
2025-10-09 10:55:22.247056 | orchestrator | c16a291593de registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api
2025-10-09 10:55:22.247081 | orchestrator | ad76d1a6175e registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api
2025-10-09 10:55:22.247098 | orchestrator | 717590227f64 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-10-09 10:55:22.247123 | orchestrator | 518def8e9e73 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api
2025-10-09 10:55:22.247133 | orchestrator | 4a65dacfb574 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter
2025-10-09 10:55:22.247144 | orchestrator | 536de42dcf82 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler
2025-10-09 10:55:22.247153 | orchestrator | 85a1d3748f57 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-10-09 10:55:22.247163 | orchestrator | 77291b91a025 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api
2025-10-09 10:55:22.247173 | orchestrator | b5fe731a0afe registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-10-09 10:55:22.247182 | orchestrator | d582299029d0 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-10-09 10:55:22.247192 | orchestrator | 0c4bb337d406 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2025-10-09 10:55:22.247202 | orchestrator | aaba307698c0 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1
2025-10-09 10:55:22.247214 | orchestrator | 96a895a4aa46 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-10-09 10:55:22.247225 | orchestrator | de364ebecc29 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-10-09 10:55:22.247236 | orchestrator | dd448ab576aa registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-10-09 10:55:22.247247 | orchestrator | 54f02911d5c5 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-10-09 10:55:22.247258 | orchestrator | 4ae70d91d63c registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-10-09 10:55:22.247269 | orchestrator | 7291baf066d8 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2025-10-09 10:55:22.247281 | orchestrator | a06636add048 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-10-09 10:55:22.247292 | orchestrator | b097adfc9f26 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1
2025-10-09 10:55:22.247302 | orchestrator | a49833317ce3 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-10-09 10:55:22.247313 | orchestrator | 9d1c578dd1b8 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-10-09 10:55:22.247331 | orchestrator | 2e3764294344 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy
2025-10-09 10:55:22.247342 | orchestrator | dfa6af989dd1 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd
2025-10-09 10:55:22.247352 | orchestrator | f0706844da17 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db
2025-10-09 10:55:22.247363 | orchestrator | 6c053b67d333 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db
2025-10-09 10:55:22.247384 | orchestrator | de130ef25a7a registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2025-10-09 10:55:22.247396 | orchestrator | 2a822fba2af6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-1
2025-10-09 10:55:22.247411 | orchestrator | 0b72a2afdda9 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller
2025-10-09 10:55:22.247423 | orchestrator | 9db38b9b082e registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd
2025-10-09 10:55:22.247434 | orchestrator | 628b14d06b3c registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db
2025-10-09 10:55:22.247445 | orchestrator | ccec4c516c12 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel
2025-10-09 10:55:22.247455 | orchestrator | 486059ad680d registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis
2025-10-09 10:55:22.247466 | orchestrator | c0ea1bb80880 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached
2025-10-09 10:55:22.247477 | orchestrator | 8ba871f6af57 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-10-09 10:55:22.247488 | orchestrator | cff776b6887b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-10-09 10:55:22.247499 | orchestrator | 24224109f16b registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-10-09 10:55:22.612175 | orchestrator |
2025-10-09 10:55:22.612277 | orchestrator | ## Images @ testbed-node-1
2025-10-09 10:55:22.612293 | orchestrator |
2025-10-09 10:55:22.612306 | orchestrator | + echo
2025-10-09 10:55:22.612318 | orchestrator | + echo '## Images @ testbed-node-1'
2025-10-09 10:55:22.612330 | orchestrator | + echo
2025-10-09 10:55:22.612342 | orchestrator | + osism container testbed-node-1 images
2025-10-09 10:55:25.131415 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-10-09 10:55:25.131488 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 210330bc6243 8 hours ago 1.27GB
2025-10-09 10:55:25.131501 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 9ad8b3dbdeaf 9 hours ago 1.01GB
2025-10-09 10:55:25.131513 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 f0f4570cb7df 9 hours ago 373MB
2025-10-09 10:55:25.131524 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 825f68120f76 9 hours ago 330MB
2025-10-09 10:55:25.131535 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 5763f1c53827 9 hours ago 274MB
2025-10-09 10:55:25.131563 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 bd79e8fe81c2 9 hours ago 1.52GB
2025-10-09 10:55:25.131575 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 8a60a267233e 9 hours ago 1.54GB
2025-10-09 10:55:25.131585 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 897775bd4fa9 9 hours ago 284MB
2025-10-09 10:55:25.131596 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 43b16b28e14b 9 hours ago 676MB
2025-10-09 10:55:25.131607 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7859e6535a7c 9 hours ago 273MB
2025-10-09 10:55:25.131618 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 bbbe7ef89f66 9 hours ago 282MB
2025-10-09 10:55:25.131629 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cef0cd0f728f 9 hours ago 586MB
2025-10-09 10:55:25.131639 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 1b3e8c0c1a21 9 hours ago 455MB
2025-10-09 10:55:25.131650 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 24aa4d4d2fe7 9 hours ago 308MB
2025-10-09 10:55:25.131661 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e0d8690f2704 9 hours ago 313MB
2025-10-09 10:55:25.131671 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 106e41046118 9 hours ago 299MB
2025-10-09 10:55:25.131682 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 aef569508034 9 hours ago 306MB
2025-10-09 10:55:25.131693 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b95198db2d64 9 hours ago 365MB
2025-10-09 10:55:25.131704 | orchestrator | registry.osism.tech/kolla/redis 2024.2 36d3f82e78f4 9 hours ago 280MB
2025-10-09 10:55:25.131715 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a9570a47c3dd 9 hours ago 280MB
2025-10-09 10:55:25.131726 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 cf77f19c55da 9 hours ago 289MB
2025-10-09 10:55:25.131736 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 d1cc862a21f7 9 hours ago 289MB
2025-10-09 10:55:25.131747 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 98cc9500ccdb 9 hours ago 1.15GB
2025-10-09 10:55:25.131758 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 96d4a0ac10d9 9 hours ago 297MB
2025-10-09 10:55:25.131769 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 3960064f2b5f 9 hours ago 297MB
2025-10-09 10:55:25.131791 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 355c058acc45 9 hours ago 297MB
2025-10-09 10:55:25.131803 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 86f05ce447ed 9 hours ago 297MB
2025-10-09 10:55:25.131813 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 a4a1f788882b 9 hours ago 1.06GB
2025-10-09 10:55:25.131824 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 881785b9a4bd 9 hours ago 1.06GB
2025-10-09
10:55:25.131835 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 92a7d76d345a 9 hours ago 1.04GB 2025-10-09 10:55:25.131846 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 a2fad00a75df 9 hours ago 1.04GB 2025-10-09 10:55:25.131857 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 4408b6d3c9a4 9 hours ago 1.04GB 2025-10-09 10:55:25.131867 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b464a782f51b 9 hours ago 1.21GB 2025-10-09 10:55:25.131878 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 af9b579d305f 9 hours ago 1.37GB 2025-10-09 10:55:25.131889 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 72b25486e20f 9 hours ago 1.21GB 2025-10-09 10:55:25.131906 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 bf4c81d8ba64 9 hours ago 1.21GB 2025-10-09 10:55:25.131918 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3a96a9a77bc6 9 hours ago 1.17GB 2025-10-09 10:55:25.131941 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 aa25d2883d52 9 hours ago 984MB 2025-10-09 10:55:25.131953 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 7d6cc28b5b2a 9 hours ago 1.11GB 2025-10-09 10:55:25.131964 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 628d0794a005 9 hours ago 998MB 2025-10-09 10:55:25.131975 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 8f915d64b487 9 hours ago 999MB 2025-10-09 10:55:25.131987 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 9f6d442e498f 9 hours ago 999MB 2025-10-09 10:55:25.132000 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0e199e24f15d 9 hours ago 1.41GB 2025-10-09 10:55:25.132012 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 50f5ca66f125 9 hours ago 1.41GB 2025-10-09 10:55:25.132058 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 919dea3a2ebf 9 hours ago 1.05GB 2025-10-09 
10:55:25.132071 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 297b8c278bd6 9 hours ago 1.09GB 2025-10-09 10:55:25.132083 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 fac16b67eba0 9 hours ago 1.05GB 2025-10-09 10:55:25.132095 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 2f267cd16f3d 9 hours ago 997MB 2025-10-09 10:55:25.132107 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f2b899e0ad5e 9 hours ago 993MB 2025-10-09 10:55:25.132119 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 172e7a985d3e 9 hours ago 992MB 2025-10-09 10:55:25.132131 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 b6c138c5df69 9 hours ago 993MB 2025-10-09 10:55:25.132143 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 5b1031e006cf 9 hours ago 993MB 2025-10-09 10:55:25.132156 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 678229f2a04f 9 hours ago 997MB 2025-10-09 10:55:25.132168 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 9de753a2ad43 9 hours ago 1.25GB 2025-10-09 10:55:25.132180 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 79b811cd316d 9 hours ago 1.14GB 2025-10-09 10:55:25.519431 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-10-09 10:55:25.520137 | orchestrator | ++ semver latest 5.0.0 2025-10-09 10:55:25.582347 | orchestrator | 2025-10-09 10:55:25.582399 | orchestrator | ## Containers @ testbed-node-2 2025-10-09 10:55:25.582412 | orchestrator | 2025-10-09 10:55:25.582423 | orchestrator | + [[ -1 -eq -1 ]] 2025-10-09 10:55:25.582434 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-10-09 10:55:25.582446 | orchestrator | + echo 2025-10-09 10:55:25.582457 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-10-09 10:55:25.582469 | orchestrator | + echo 2025-10-09 10:55:25.582480 | orchestrator | + osism container testbed-node-2 ps 2025-10-09 
10:55:28.084757 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-10-09 10:55:28.084845 | orchestrator | 44ce51e0e59f registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-10-09 10:55:28.084858 | orchestrator | 23b81b687dd6 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-10-09 10:55:28.084866 | orchestrator | 3e32c2c3d369 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-10-09 10:55:28.084892 | orchestrator | 47273630812c registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-10-09 10:55:28.084901 | orchestrator | dd32540a3ae2 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-10-09 10:55:28.084908 | orchestrator | 781f12c3e770 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-10-09 10:55:28.084916 | orchestrator | 3977de2b122b registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-10-09 10:55:28.084923 | orchestrator | b19c962f7099 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-10-09 10:55:28.084930 | orchestrator | 7275891d68ff registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2025-10-09 10:55:28.084951 | orchestrator | def188ebe4b0 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-10-09 10:55:28.084960 | orchestrator | 3e05db0e2b5c registry.osism.tech/kolla/designate-worker:2024.2 
"dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) designate_worker 2025-10-09 10:55:28.084967 | orchestrator | 83e749c537c7 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-10-09 10:55:28.084974 | orchestrator | 5023bece4713 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) nova_conductor 2025-10-09 10:55:28.084981 | orchestrator | 90df10aa28cb registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-10-09 10:55:28.084988 | orchestrator | cdd82e7a1207 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-10-09 10:55:28.084996 | orchestrator | 4fe1d55823cd registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-10-09 10:55:28.085003 | orchestrator | 0ffacdfedbd1 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-10-09 10:55:28.085010 | orchestrator | 58e2d7100e7e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2025-10-09 10:55:28.085046 | orchestrator | 574bd129fc05 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_worker 2025-10-09 10:55:28.085053 | orchestrator | 85b42e01dc44 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2025-10-09 10:55:28.085061 | orchestrator | dc4b99981fa3 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-10-09 10:55:28.085081 | orchestrator | a917d52df1b0 
registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-10-09 10:55:28.085096 | orchestrator | f496d9fdb4c9 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-10-09 10:55:28.085103 | orchestrator | 8862b49dfe73 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2025-10-09 10:55:28.085111 | orchestrator | f7146fe1af16 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-10-09 10:55:28.085119 | orchestrator | 03293fe5bdfc registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-10-09 10:55:28.085126 | orchestrator | 37ad6ce0db81 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-10-09 10:55:28.085134 | orchestrator | 11ded9cb07fb registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-10-09 10:55:28.085141 | orchestrator | 709ccd820587 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-10-09 10:55:28.085148 | orchestrator | 1bee2354c4fa registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-10-09 10:55:28.085156 | orchestrator | ac90fd1a1071 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-10-09 10:55:28.085532 | orchestrator | 5abc8d8fe03d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 
2025-10-09 10:55:28.085551 | orchestrator | cb33bc8d8086 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-10-09 10:55:28.085559 | orchestrator | 5760e68c6378 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-10-09 10:55:28.085566 | orchestrator | 25b15bf1e628 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-10-09 10:55:28.085574 | orchestrator | aa2d1f506c7b registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-10-09 10:55:28.085581 | orchestrator | 81373ef682da registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-10-09 10:55:28.085588 | orchestrator | dd10ee7e236d registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-10-09 10:55:28.085596 | orchestrator | 68d183e6f8f9 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-10-09 10:55:28.085603 | orchestrator | 3f303a075cbd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-10-09 10:55:28.085611 | orchestrator | cc5880a2f8d1 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-10-09 10:55:28.085618 | orchestrator | 3baf8297125e registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes (healthy) proxysql 2025-10-09 10:55:28.085634 | orchestrator | 992c39919bfd registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-10-09 10:55:28.085648 | orchestrator | 563c196d0991 
registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2025-10-09 10:55:28.085655 | orchestrator | eb2b66a0b6f3 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-10-09 10:55:28.085666 | orchestrator | 8615f2d45c12 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_nb_db 2025-10-09 10:55:28.085674 | orchestrator | 644d6ba2780e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-10-09 10:55:28.085681 | orchestrator | ee28ecce7629 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2025-10-09 10:55:28.085688 | orchestrator | e62ed7d75d26 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-10-09 10:55:28.086177 | orchestrator | 5fe66a161ccf registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-10-09 10:55:28.086197 | orchestrator | c3bdca422772 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2025-10-09 10:55:28.086205 | orchestrator | 28cc0e010465 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-10-09 10:55:28.086212 | orchestrator | c8a11f891801 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-10-09 10:55:28.086220 | orchestrator | 2dc59f1d12eb registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-10-09 10:55:28.086228 | orchestrator | 66e9fbdcfa94 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 
31 minutes cron 2025-10-09 10:55:28.086235 | orchestrator | fb148680d458 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2025-10-09 10:55:28.086242 | orchestrator | 1f4502c0588e registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-10-09 10:55:28.443327 | orchestrator | 2025-10-09 10:55:28.443426 | orchestrator | ## Images @ testbed-node-2 2025-10-09 10:55:28.443441 | orchestrator | 2025-10-09 10:55:28.443454 | orchestrator | + echo 2025-10-09 10:55:28.443466 | orchestrator | + echo '## Images @ testbed-node-2' 2025-10-09 10:55:28.443478 | orchestrator | + echo 2025-10-09 10:55:28.443490 | orchestrator | + osism container testbed-node-2 images 2025-10-09 10:55:30.947406 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-10-09 10:55:30.947498 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 210330bc6243 8 hours ago 1.27GB 2025-10-09 10:55:30.947508 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 9ad8b3dbdeaf 9 hours ago 1.01GB 2025-10-09 10:55:30.947517 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 f0f4570cb7df 9 hours ago 373MB 2025-10-09 10:55:30.947556 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 825f68120f76 9 hours ago 330MB 2025-10-09 10:55:30.947570 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 5763f1c53827 9 hours ago 274MB 2025-10-09 10:55:30.947582 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 8a60a267233e 9 hours ago 1.54GB 2025-10-09 10:55:30.947595 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 bd79e8fe81c2 9 hours ago 1.52GB 2025-10-09 10:55:30.947608 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 897775bd4fa9 9 hours ago 284MB 2025-10-09 10:55:30.947616 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 43b16b28e14b 9 hours ago 676MB 2025-10-09 10:55:30.947623 | orchestrator | 
registry.osism.tech/kolla/haproxy 2024.2 bbbe7ef89f66 9 hours ago 282MB 2025-10-09 10:55:30.947631 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7859e6535a7c 9 hours ago 273MB 2025-10-09 10:55:30.947638 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cef0cd0f728f 9 hours ago 586MB 2025-10-09 10:55:30.947645 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 1b3e8c0c1a21 9 hours ago 455MB 2025-10-09 10:55:30.947656 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 24aa4d4d2fe7 9 hours ago 308MB 2025-10-09 10:55:30.947668 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e0d8690f2704 9 hours ago 313MB 2025-10-09 10:55:30.947684 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 106e41046118 9 hours ago 299MB 2025-10-09 10:55:30.947700 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 aef569508034 9 hours ago 306MB 2025-10-09 10:55:30.947712 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b95198db2d64 9 hours ago 365MB 2025-10-09 10:55:30.947724 | orchestrator | registry.osism.tech/kolla/redis 2024.2 36d3f82e78f4 9 hours ago 280MB 2025-10-09 10:55:30.947735 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a9570a47c3dd 9 hours ago 280MB 2025-10-09 10:55:30.947747 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 cf77f19c55da 9 hours ago 289MB 2025-10-09 10:55:30.947760 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 d1cc862a21f7 9 hours ago 289MB 2025-10-09 10:55:30.947772 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 98cc9500ccdb 9 hours ago 1.15GB 2025-10-09 10:55:30.947784 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 96d4a0ac10d9 9 hours ago 297MB 2025-10-09 10:55:30.947796 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 3960064f2b5f 9 hours ago 297MB 2025-10-09 10:55:30.947803 | orchestrator 
| registry.osism.tech/kolla/ovn-sb-db-server 2024.2 355c058acc45 9 hours ago 297MB 2025-10-09 10:55:30.947811 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 86f05ce447ed 9 hours ago 297MB 2025-10-09 10:55:30.947823 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 a4a1f788882b 9 hours ago 1.06GB 2025-10-09 10:55:30.947835 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 881785b9a4bd 9 hours ago 1.06GB 2025-10-09 10:55:30.947847 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 92a7d76d345a 9 hours ago 1.04GB 2025-10-09 10:55:30.947859 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 a2fad00a75df 9 hours ago 1.04GB 2025-10-09 10:55:30.947871 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 4408b6d3c9a4 9 hours ago 1.04GB 2025-10-09 10:55:30.947883 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b464a782f51b 9 hours ago 1.21GB 2025-10-09 10:55:30.947905 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 af9b579d305f 9 hours ago 1.37GB 2025-10-09 10:55:30.947918 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 72b25486e20f 9 hours ago 1.21GB 2025-10-09 10:55:30.947929 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 bf4c81d8ba64 9 hours ago 1.21GB 2025-10-09 10:55:30.947942 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3a96a9a77bc6 9 hours ago 1.17GB 2025-10-09 10:55:30.947973 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 aa25d2883d52 9 hours ago 984MB 2025-10-09 10:55:30.947986 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 7d6cc28b5b2a 9 hours ago 1.11GB 2025-10-09 10:55:30.948045 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 628d0794a005 9 hours ago 998MB 2025-10-09 10:55:30.948055 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 8f915d64b487 9 hours ago 999MB 2025-10-09 10:55:30.948065 | orchestrator | 
registry.osism.tech/kolla/barbican-keystone-listener 2024.2 9f6d442e498f 9 hours ago 999MB 2025-10-09 10:55:30.948073 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0e199e24f15d 9 hours ago 1.41GB 2025-10-09 10:55:30.948082 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 50f5ca66f125 9 hours ago 1.41GB 2025-10-09 10:55:30.948090 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 919dea3a2ebf 9 hours ago 1.05GB 2025-10-09 10:55:30.948098 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 297b8c278bd6 9 hours ago 1.09GB 2025-10-09 10:55:30.948106 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 fac16b67eba0 9 hours ago 1.05GB 2025-10-09 10:55:30.948114 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 2f267cd16f3d 9 hours ago 997MB 2025-10-09 10:55:30.948122 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f2b899e0ad5e 9 hours ago 993MB 2025-10-09 10:55:30.948130 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 172e7a985d3e 9 hours ago 992MB 2025-10-09 10:55:30.948138 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 b6c138c5df69 9 hours ago 993MB 2025-10-09 10:55:30.948146 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 5b1031e006cf 9 hours ago 993MB 2025-10-09 10:55:30.948160 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 678229f2a04f 9 hours ago 997MB 2025-10-09 10:55:30.948172 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 9de753a2ad43 9 hours ago 1.25GB 2025-10-09 10:55:30.948185 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 79b811cd316d 9 hours ago 1.14GB 2025-10-09 10:55:31.436095 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-10-09 10:55:31.445200 | orchestrator | + set -e 2025-10-09 10:55:31.445252 | orchestrator | + source /opt/manager-vars.sh 2025-10-09 10:55:31.446464 | orchestrator | ++ export NUMBER_OF_NODES=6 
2025-10-09 10:55:31.446500 | orchestrator | ++ NUMBER_OF_NODES=6 2025-10-09 10:55:31.446512 | orchestrator | ++ export CEPH_VERSION=reef 2025-10-09 10:55:31.446523 | orchestrator | ++ CEPH_VERSION=reef 2025-10-09 10:55:31.446534 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-10-09 10:55:31.446546 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-10-09 10:55:31.446557 | orchestrator | ++ export MANAGER_VERSION=latest 2025-10-09 10:55:31.446574 | orchestrator | ++ MANAGER_VERSION=latest 2025-10-09 10:55:31.446592 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-10-09 10:55:31.446609 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-10-09 10:55:31.446627 | orchestrator | ++ export ARA=false 2025-10-09 10:55:31.446645 | orchestrator | ++ ARA=false 2025-10-09 10:55:31.446662 | orchestrator | ++ export DEPLOY_MODE=manager 2025-10-09 10:55:31.446685 | orchestrator | ++ DEPLOY_MODE=manager 2025-10-09 10:55:31.446702 | orchestrator | ++ export TEMPEST=false 2025-10-09 10:55:31.446722 | orchestrator | ++ TEMPEST=false 2025-10-09 10:55:31.446740 | orchestrator | ++ export IS_ZUUL=true 2025-10-09 10:55:31.446789 | orchestrator | ++ IS_ZUUL=true 2025-10-09 10:55:31.446809 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25 2025-10-09 10:55:31.446827 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25 2025-10-09 10:55:31.446850 | orchestrator | ++ export EXTERNAL_API=false 2025-10-09 10:55:31.446874 | orchestrator | ++ EXTERNAL_API=false 2025-10-09 10:55:31.446890 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-10-09 10:55:31.446907 | orchestrator | ++ IMAGE_USER=ubuntu 2025-10-09 10:55:31.446925 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-10-09 10:55:31.446943 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-10-09 10:55:31.446961 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-10-09 10:55:31.446979 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-10-09 10:55:31.446997 | orchestrator | + [[ ceph-ansible == 
\c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-10-09 10:55:31.447047 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-10-09 10:55:31.454901 | orchestrator | + set -e 2025-10-09 10:55:31.454967 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 10:55:31.454990 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 10:55:31.455010 | orchestrator | ++ INTERACTIVE=false 2025-10-09 10:55:31.455069 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 10:55:31.455087 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 10:55:31.456050 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-10-09 10:55:31.456276 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-10-09 10:55:31.467843 | orchestrator | 2025-10-09 10:55:31.467900 | orchestrator | # Ceph status 2025-10-09 10:55:31.467913 | orchestrator | 2025-10-09 10:55:31.467924 | orchestrator | ++ export MANAGER_VERSION=latest 2025-10-09 10:55:31.467936 | orchestrator | ++ MANAGER_VERSION=latest 2025-10-09 10:55:31.467948 | orchestrator | + echo 2025-10-09 10:55:31.467959 | orchestrator | + echo '# Ceph status' 2025-10-09 10:55:31.467974 | orchestrator | + echo 2025-10-09 10:55:31.467992 | orchestrator | + ceph -s 2025-10-09 10:55:32.106172 | orchestrator | cluster: 2025-10-09 10:55:32.106255 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-10-09 10:55:32.106269 | orchestrator | health: HEALTH_OK 2025-10-09 10:55:32.106279 | orchestrator | 2025-10-09 10:55:32.106287 | orchestrator | services: 2025-10-09 10:55:32.106296 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m) 2025-10-09 10:55:32.106307 | orchestrator | mgr: testbed-node-0(active, since 17m), standbys: testbed-node-1, testbed-node-2 2025-10-09 10:55:32.106316 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-10-09 10:55:32.106324 | orchestrator | osd: 6 
    osds: 6 up (since 25m), 6 in (since 26m)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   14 pools, 401 pgs
    objects: 524 objects, 2.2 GiB
    usage:   7.1 GiB used, 113 GiB / 120 GiB avail
    pgs:     401 active+clean

# Ceph versions

+ echo
+ echo '# Ceph versions'
+ echo
+ ceph versions
{
    "mon": {
        "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
    },
    "mgr": {
        "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
    },
    "osd": {
        "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
    },
    "mds": {
        "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
    },
    "rgw": {
        "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
    },
    "overall": {
        "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
    }
}

# Ceph OSD tree

+ echo
+ echo '# Ceph OSD tree'
+ echo
+ ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
-1         0.11691         -  120 GiB  7.1 GiB  6.7 GiB    6 KiB  430 MiB  113 GiB  5.92  1.00    -          root default
-7         0.03897         -   40 GiB  2.4 GiB  2.2 GiB    2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-3
 0    hdd  0.01949   1.00000   20 GiB  1.3 GiB  1.2 GiB    1 KiB   74 MiB   19 GiB  6.48  1.09  186      up          osd.0
 4    hdd  0.01949   1.00000   20 GiB  1.1 GiB  1.0 GiB    1 KiB   70 MiB   19 GiB  5.36  0.91  202      up          osd.4
-5         0.03897         -   40 GiB  2.4 GiB  2.2 GiB    2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-4
 1    hdd  0.01949   1.00000   20 GiB  821 MiB  747 MiB    1 KiB   74 MiB   19 GiB  4.01  0.68  177      up          osd.1
 3    hdd  0.01949   1.00000   20 GiB  1.6 GiB  1.5 GiB    1 KiB   70 MiB   18 GiB  7.82  1.32  215      up          osd.3
-3         0.03897         -   40 GiB  2.4 GiB  2.2 GiB    2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-5
 2    hdd  0.01949   1.00000   20 GiB  1.2 GiB  1.1 GiB    1 KiB   70 MiB   19 GiB  5.94  1.00  196      up          osd.2
 5    hdd  0.01949   1.00000   20 GiB  1.2 GiB  1.1 GiB    1 KiB   74 MiB   19 GiB  5.89  0.99  194      up          osd.5
                      TOTAL  120 GiB  7.1 GiB  6.7 GiB  9.3 KiB  430 MiB  113 GiB  5.92
MIN/MAX VAR: 0.68/1.32  STDDEV: 1.15

# Ceph monitor status

+ echo
+ echo '# Ceph monitor status'
+ echo
+ ceph mon stat
e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2

# Ceph quorum status

+ echo
+ echo '# Ceph quorum status'
+ echo
+ ceph quorum_status
+ jq
{
  "election_epoch": 6,
  "quorum": [
    0,
    1,
    2
  ],
  "quorum_names": [
    "testbed-node-0",
    "testbed-node-1",
    "testbed-node-2"
  ],
  "quorum_leader_name": "testbed-node-0",
  "quorum_age": 1766,
  "features": {
    "quorum_con": "4540138322906710015",
    "quorum_mon": [
      "kraken",
      "luminous",
      "mimic",
      "osdmap-prune",
      "nautilus",
      "octopus",
      "pacific",
      "elector-pinging",
      "quincy",
      "reef"
    ]
  },
  "monmap": {
    "epoch": 1,
    "fsid": "11111111-1111-1111-1111-111111111111",
    "modified": "2025-10-09T10:25:46.685106Z",
    "created": "2025-10-09T10:25:46.685106Z",
    "min_mon_release": 18,
    "min_mon_release_name": "reef",
    "election_strategy": 1,
    "disallowed_leaders: ": "",
    "stretch_mode": false,
    "tiebreaker_mon": "",
    "removed_ranks: ": "",
    "features": {
      "persistent": [
        "kraken",
        "luminous",
        "mimic",
        "osdmap-prune",
        "nautilus",
        "octopus",
        "pacific",
        "elector-pinging",
        "quincy",
        "reef"
      ],
      "optional": []
    },
    "mons": [
      {
        "rank": 0,
        "name": "testbed-node-0",
        "public_addrs": {
          "addrvec": [
            {"type": "v2", "addr": "192.168.16.10:3300", "nonce": 0},
            {"type": "v1", "addr": "192.168.16.10:6789", "nonce": 0}
          ]
        },
        "addr": "192.168.16.10:6789/0",
        "public_addr": "192.168.16.10:6789/0",
        "priority": 0,
        "weight": 0,
        "crush_location": "{}"
      },
      {
        "rank": 1,
        "name": "testbed-node-1",
        "public_addrs": {
          "addrvec": [
            {"type": "v2", "addr": "192.168.16.11:3300", "nonce": 0},
            {"type": "v1", "addr": "192.168.16.11:6789", "nonce": 0}
          ]
        },
        "addr": "192.168.16.11:6789/0",
        "public_addr": "192.168.16.11:6789/0",
        "priority": 0,
        "weight": 0,
        "crush_location": "{}"
      },
      {
        "rank": 2,
        "name": "testbed-node-2",
        "public_addrs": {
          "addrvec": [
            {"type": "v2", "addr": "192.168.16.12:3300", "nonce": 0},
            {"type": "v1", "addr": "192.168.16.12:6789", "nonce": 0}
          ]
        },
        "addr": "192.168.16.12:6789/0",
        "public_addr": "192.168.16.12:6789/0",
        "priority": 0,
        "weight": 0,
        "crush_location": "{}"
      }
    ]
  }
}

# Ceph free space status

+ echo
+ echo '# Ceph free space status'
+ echo
+ ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.92
TOTAL  120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.92

--- POOLS ---
POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
.mgr                        1    1  577 KiB        2  1.1 MiB      0     52 GiB
cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
default.rgw.log             7   32  3.6 KiB      177  408 KiB      0     35 GiB
default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
.rgw.root                   9   32  3.9 KiB        8   64 KiB      0     52 GiB
backups                    10   32     19 B        2   12 KiB      0     35 GiB
volumes                    11   32     19 B        2   12 KiB      0     35 GiB
images                     12   32  2.2 GiB      299  6.7 GiB   5.99     35 GiB
metrics                    13   32     19 B        2   12 KiB      0     35 GiB
vms                        14   32     19 B        2   12 KiB      0     35 GiB

++ semver latest 5.0.0
+ [[ -1 -eq -1 ]]
+ [[ latest != \l\a\t\e\s\t ]]
+ [[ ! -e /etc/redhat-release ]]
+ osism apply facts
2025-10-09 10:55:38 | INFO  | Task 28fbba17-50c4-4b03-b7d9-3054cc9b2f8a (facts) was prepared for execution.
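The quorum data above is what the "Pass quorum test if all monitors are in quorum" step of `osism validate ceph-mons` later checks. A minimal sketch of that comparison (illustrative only, not the validator's actual code) against the `ceph quorum_status` JSON shape:

```python
# Illustrative sketch (not the osism validator's actual code): the quorum test
# passes when every monitor listed in the monmap also appears in quorum_names.
def mons_in_quorum(quorum_status: dict) -> bool:
    monmap_names = {mon["name"] for mon in quorum_status["monmap"]["mons"]}
    return monmap_names == set(quorum_status["quorum_names"])

# Sample shaped like the `ceph quorum_status` output above; on a live cluster
# one would parse the JSON printed by the command instead.
sample = {
    "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "monmap": {"mons": [{"name": "testbed-node-%d" % i} for i in range(3)]},
}
print(mons_in_quorum(sample))  # True: all three mons are in quorum
```

With a mon missing from `quorum_names` the same check returns False, which is the condition the validator reports as a failed quorum test.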
2025-10-09 10:55:38 | INFO  | It takes a moment until task 28fbba17-50c4-4b03-b7d9-3054cc9b2f8a (facts) has been started and output is visible here.

PLAY [Apply role facts] ********************************************************

TASK [osism.commons.facts : Create custom facts directory] *********************
Thursday 09 October 2025 10:55:42 +0000 (0:00:00.270)       0:00:00.270 ******
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.facts : Copy fact files] ***********************************
Thursday 09 October 2025 10:55:44 +0000 (0:00:01.694)       0:00:01.965 ******
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY [Gather facts for all hosts] **********************************************

TASK [Gathers facts about hosts] ***********************************************
Thursday 09 October 2025 10:55:45 +0000 (0:00:01.443)       0:00:03.408 ******
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

PLAY [Gather facts for all hosts if using --limit] *****************************

TASK [Gather facts for all hosts] **********************************************
Thursday 09 October 2025 10:55:51 +0000 (0:00:05.454)       0:00:08.862 ******
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager  : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
testbed-node-0   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
testbed-node-1   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
testbed-node-2   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
testbed-node-3   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
testbed-node-4   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
testbed-node-5   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Thursday 09 October 2025 10:55:51 +0000 (0:00:00.592)       0:00:09.454 ******
===============================================================================
Gathers facts about hosts ----------------------------------------------- 5.45s
osism.commons.facts : Create custom facts directory --------------------- 1.69s
osism.commons.facts : Copy fact files ----------------------------------- 1.44s
Gather facts for all hosts ---------------------------------------------- 0.59s

+ osism validate ceph-mons

PLAY [Ceph validate mons]
*******************************************************

TASK [Get timestamp for report file] *******************************************
Thursday 09 October 2025 10:56:10 +0000 (0:00:00.451)       0:00:00.451 ******
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Create report output directory] ******************************************
Thursday 09 October 2025 10:56:10 +0000 (0:00:00.900)       0:00:01.352 ******
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Define report vars] ******************************************************
Thursday 09 October 2025 10:56:11 +0000 (0:00:01.054)       0:00:02.407 ******
ok: [testbed-node-0]

TASK [Prepare test data for container existence test] **************************
Thursday 09 October 2025 10:56:12 +0000 (0:00:00.141)       0:00:02.548 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Get container info] ******************************************************
Thursday 09 October 2025 10:56:12 +0000 (0:00:00.318)       0:00:02.867 ******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [Set test result to failed if container is missing] ***********************
Thursday 09 October 2025 10:56:13 +0000 (0:00:01.116)       0:00:03.983 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Set test result to passed if container is existing] **********************
Thursday 09 October 2025 10:56:13 +0000 (0:00:00.298)       0:00:04.281 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Prepare test data] *******************************************************
Thursday 09 October 2025 10:56:14 +0000 (0:00:00.520)       0:00:04.802 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Set test result to failed if ceph-mon is not running] ********************
Thursday 09 October 2025 10:56:14 +0000 (0:00:00.313)       0:00:05.116 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Set test result to passed if ceph-mon is running] ************************
Thursday 09 October 2025 10:56:15 +0000 (0:00:00.317)       0:00:05.433 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Aggregate test results step one] *****************************************
Thursday 09 October 2025 10:56:15 +0000 (0:00:00.514)       0:00:05.948 ******
skipping: [testbed-node-0]

TASK [Aggregate test results step two] *****************************************
Thursday 09 October 2025 10:56:15 +0000 (0:00:00.283)       0:00:06.231 ******
skipping: [testbed-node-0]

TASK [Aggregate test results step three] ***************************************
Thursday 09 October 2025 10:56:16 +0000 (0:00:00.270)       0:00:06.501 ******
skipping: [testbed-node-0]

TASK [Flush handlers] **********************************************************
Thursday 09 October 2025 10:56:16 +0000 (0:00:00.240)       0:00:06.742 ******

TASK [Flush handlers] **********************************************************
Thursday 09 October 2025 10:56:16 +0000 (0:00:00.073)       0:00:06.815 ******

TASK [Flush handlers] **********************************************************
Thursday 09 October 2025 10:56:16 +0000 (0:00:00.087)       0:00:06.903 ******

TASK [Print report file information] *******************************************
Thursday 09 October 2025 10:56:16 +0000 (0:00:00.073)       0:00:06.977 ******
skipping: [testbed-node-0]

TASK [Fail due to missing containers] ******************************************
Thursday 09 October 2025 10:56:16 +0000 (0:00:00.250)       0:00:07.227 ******
skipping: [testbed-node-0]

TASK [Prepare quorum test vars] ************************************************
Thursday 09 October 2025 10:56:17 +0000 (0:00:00.273)       0:00:07.500 ******
ok: [testbed-node-0]

TASK [Get monmap info from one mon container] **********************************
Thursday 09 October 2025 10:56:17 +0000 (0:00:00.129)       0:00:07.630 ******
changed: [testbed-node-0]

TASK [Set quorum test data] ****************************************************
Thursday 09 October 2025 10:56:18 +0000 (0:00:01.613)       0:00:09.244 ******
ok: [testbed-node-0]

TASK [Fail quorum test if not all monitors are in quorum] **********************
Thursday 09 October 2025 10:56:19 +0000 (0:00:00.538)       0:00:09.782 ******
skipping: [testbed-node-0]

TASK [Pass quorum test if all monitors are in quorum] **************************
Thursday 09 October 2025 10:56:19 +0000 (0:00:00.129)       0:00:09.912 ******
ok: [testbed-node-0]

TASK [Set fsid test vars] ******************************************************
Thursday 09 October 2025 10:56:19 +0000 (0:00:00.346)       0:00:10.258 ******
ok: [testbed-node-0]

TASK [Fail Cluster FSID test if FSID does not match configuration] *************
Thursday 09 October 2025 10:56:20 +0000 (0:00:00.336)       0:00:10.595 ******
skipping: [testbed-node-0]

TASK [Pass Cluster FSID test if it matches configuration] **********************
Thursday 09 October 2025 10:56:20 +0000 (0:00:00.137)       0:00:10.732 ******
ok: [testbed-node-0]

TASK [Prepare status test vars] ************************************************
Thursday 09 October 2025 10:56:20 +0000 (0:00:00.143)       0:00:10.876 ******
ok: [testbed-node-0]

TASK [Gather status data] ******************************************************
Thursday 09 October 2025 10:56:20 +0000 (0:00:00.111)       0:00:10.987 ******
changed: [testbed-node-0]

TASK [Set health test data] ****************************************************
Thursday 09 October 2025 10:56:21 +0000 (0:00:01.388)       0:00:12.376 ******
ok: [testbed-node-0]

TASK [Fail cluster-health if health is not acceptable] *************************
Thursday 09 October 2025 10:56:22 +0000 (0:00:00.312)       0:00:12.689 ******
skipping: [testbed-node-0]

TASK [Pass cluster-health if health is acceptable] *****************************
Thursday 09 October 2025 10:56:22 +0000 (0:00:00.141)       0:00:12.830 ******
ok: [testbed-node-0]

TASK [Fail cluster-health if health is not acceptable (strict)] ****************
Thursday 09 October 2025 10:56:22 +0000 (0:00:00.165)       0:00:12.995 ******
skipping: [testbed-node-0]

TASK [Pass cluster-health if status is OK (strict)] ****************************
Thursday 09 October 2025 10:56:22 +0000 (0:00:00.147)       0:00:13.142 ******
skipping: [testbed-node-0]

TASK [Set validation result to passed if no test failed] ***********************
Thursday 09 October 2025 10:56:23 +0000 (0:00:00.338)       0:00:13.481 ******
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Set validation result to failed if a test failed] ************************
Thursday 09 October 2025 10:56:23 +0000 (0:00:00.264)       0:00:13.745 ******
skipping: [testbed-node-0]

TASK [Aggregate test results step one] *****************************************
Thursday 09 October 2025 10:56:23 +0000 (0:00:00.266)       0:00:14.012 ******
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Aggregate test results step two] *****************************************
Thursday 09 October 2025 10:56:25 +0000 (0:00:01.790)       0:00:15.803 ******
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Aggregate test results step three] ***************************************
Thursday 09 October 2025 10:56:25 +0000 (0:00:00.304)       0:00:16.107 ******
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Flush handlers] **********************************************************
Thursday 09 October 2025 10:56:25 +0000 (0:00:00.284)       0:00:16.391 ******

TASK [Flush handlers] **********************************************************
Thursday 09 October 2025 10:56:26 +0000 (0:00:00.079)       0:00:16.471 ******

TASK [Flush handlers] **********************************************************
Thursday 09 October 2025 10:56:26 +0000 (0:00:00.081)       0:00:16.553 ******

RUNNING HANDLER [Write report file] ********************************************
Thursday 09 October 2025 10:56:26 +0000 (0:00:00.076)       0:00:16.630 ******
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Print report file information] *******************************************
Thursday 09 October 2025 10:56:27 +0000 (0:00:01.596)       0:00:18.226 ******
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
    "msg": [
        "Validator run completed.",
        "You can find the report file here:",
        "/opt/reports/validator/ceph-mons-validator-2025-10-09T10:56:10+00:00-report.json",
        "on the following host:",
        "testbed-manager"
    ]
}

PLAY RECAP *********************************************************************
testbed-node-0   : ok=24  changed=5  unreachable=0  failed=0  skipped=13  rescued=0  ignored=0
testbed-node-1   : ok=5   changed=0  unreachable=0  failed=0  skipped=2   rescued=0  ignored=0
testbed-node-2   : ok=5   changed=0  unreachable=0  failed=0  skipped=2   rescued=0  ignored=0

TASKS RECAP ********************************************************************
Thursday 09 October 2025 10:56:28 +0000 (0:00:00.914)       0:00:19.140 ******
===============================================================================
Aggregate test results step one ----------------------------------------- 1.79s
Get monmap info from one mon container ---------------------------------- 1.61s
Write report file ------------------------------------------------------- 1.60s 2025-10-09 10:56:29.111587 | orchestrator | Gather status data ------------------------------------------------------ 1.39s 2025-10-09 10:56:29.111599 | orchestrator | Get container info ------------------------------------------------------ 1.12s 2025-10-09 10:56:29.111611 | orchestrator | Create report output directory ------------------------------------------ 1.05s 2025-10-09 10:56:29.111624 | orchestrator | Print report file information ------------------------------------------- 0.91s 2025-10-09 10:56:29.111652 | orchestrator | Get timestamp for report file ------------------------------------------- 0.90s 2025-10-09 10:56:29.111665 | orchestrator | Set quorum test data ---------------------------------------------------- 0.54s 2025-10-09 10:56:29.111677 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s 2025-10-09 10:56:29.111689 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.51s 2025-10-09 10:56:29.111702 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.35s 2025-10-09 10:56:29.111714 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s 2025-10-09 10:56:29.111732 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s 2025-10-09 10:56:29.111745 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2025-10-09 10:56:29.111758 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.32s 2025-10-09 10:56:29.111770 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-10-09 10:56:29.111783 | orchestrator | Set health test data ---------------------------------------------------- 0.31s 2025-10-09 10:56:29.111795 | orchestrator | Aggregate test 
results step two ----------------------------------------- 0.30s 2025-10-09 10:56:29.111808 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-10-09 10:56:29.490767 | orchestrator | + osism validate ceph-mgrs 2025-10-09 10:57:02.036389 | orchestrator | 2025-10-09 10:57:02.036515 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-10-09 10:57:02.036532 | orchestrator | 2025-10-09 10:57:02.036545 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-10-09 10:57:02.036557 | orchestrator | Thursday 09 October 2025 10:56:46 +0000 (0:00:00.475) 0:00:00.475 ****** 2025-10-09 10:57:02.036569 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:02.036580 | orchestrator | 2025-10-09 10:57:02.036592 | orchestrator | TASK [Create report output directory] ****************************************** 2025-10-09 10:57:02.036603 | orchestrator | Thursday 09 October 2025 10:56:47 +0000 (0:00:00.979) 0:00:01.455 ****** 2025-10-09 10:57:02.036614 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:02.036625 | orchestrator | 2025-10-09 10:57:02.036636 | orchestrator | TASK [Define report vars] ****************************************************** 2025-10-09 10:57:02.036647 | orchestrator | Thursday 09 October 2025 10:56:48 +0000 (0:00:01.033) 0:00:02.488 ****** 2025-10-09 10:57:02.036659 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.036671 | orchestrator | 2025-10-09 10:57:02.036683 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-10-09 10:57:02.036694 | orchestrator | Thursday 09 October 2025 10:56:48 +0000 (0:00:00.135) 0:00:02.623 ****** 2025-10-09 10:57:02.036705 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.036716 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:57:02.036727 | 
orchestrator | ok: [testbed-node-2] 2025-10-09 10:57:02.036738 | orchestrator | 2025-10-09 10:57:02.036749 | orchestrator | TASK [Get container info] ****************************************************** 2025-10-09 10:57:02.036760 | orchestrator | Thursday 09 October 2025 10:56:49 +0000 (0:00:00.319) 0:00:02.942 ****** 2025-10-09 10:57:02.036771 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:57:02.036782 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:57:02.036794 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.036805 | orchestrator | 2025-10-09 10:57:02.036816 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-10-09 10:57:02.036827 | orchestrator | Thursday 09 October 2025 10:56:50 +0000 (0:00:01.018) 0:00:03.961 ****** 2025-10-09 10:57:02.036838 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:57:02.036850 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:57:02.036861 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:57:02.036872 | orchestrator | 2025-10-09 10:57:02.036883 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-10-09 10:57:02.036894 | orchestrator | Thursday 09 October 2025 10:56:50 +0000 (0:00:00.401) 0:00:04.362 ****** 2025-10-09 10:57:02.036905 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.036919 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:57:02.036933 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:57:02.036945 | orchestrator | 2025-10-09 10:57:02.036958 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-09 10:57:02.036971 | orchestrator | Thursday 09 October 2025 10:56:51 +0000 (0:00:00.511) 0:00:04.874 ****** 2025-10-09 10:57:02.036983 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.036996 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:57:02.037031 | orchestrator | ok: [testbed-node-2] 2025-10-09 
10:57:02.037044 | orchestrator | 2025-10-09 10:57:02.037056 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-10-09 10:57:02.037069 | orchestrator | Thursday 09 October 2025 10:56:51 +0000 (0:00:00.300) 0:00:05.174 ****** 2025-10-09 10:57:02.037082 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:57:02.037094 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:57:02.037107 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:57:02.037120 | orchestrator | 2025-10-09 10:57:02.037133 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-10-09 10:57:02.037146 | orchestrator | Thursday 09 October 2025 10:56:51 +0000 (0:00:00.287) 0:00:05.461 ****** 2025-10-09 10:57:02.037159 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.037195 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:57:02.037207 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:57:02.037221 | orchestrator | 2025-10-09 10:57:02.037233 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-09 10:57:02.037246 | orchestrator | Thursday 09 October 2025 10:56:52 +0000 (0:00:00.517) 0:00:05.979 ****** 2025-10-09 10:57:02.037259 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:57:02.037271 | orchestrator | 2025-10-09 10:57:02.037283 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-09 10:57:02.037294 | orchestrator | Thursday 09 October 2025 10:56:52 +0000 (0:00:00.263) 0:00:06.243 ****** 2025-10-09 10:57:02.037305 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:57:02.037316 | orchestrator | 2025-10-09 10:57:02.037327 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-09 10:57:02.037338 | orchestrator | Thursday 09 October 2025 10:56:52 +0000 (0:00:00.251) 0:00:06.494 ****** 2025-10-09 10:57:02.037349 | 
orchestrator | skipping: [testbed-node-0] 2025-10-09 10:57:02.037360 | orchestrator | 2025-10-09 10:57:02.037371 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:57:02.037382 | orchestrator | Thursday 09 October 2025 10:56:52 +0000 (0:00:00.254) 0:00:06.749 ****** 2025-10-09 10:57:02.037393 | orchestrator | 2025-10-09 10:57:02.037404 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:57:02.037415 | orchestrator | Thursday 09 October 2025 10:56:53 +0000 (0:00:00.095) 0:00:06.844 ****** 2025-10-09 10:57:02.037426 | orchestrator | 2025-10-09 10:57:02.037451 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:57:02.037462 | orchestrator | Thursday 09 October 2025 10:56:53 +0000 (0:00:00.104) 0:00:06.948 ****** 2025-10-09 10:57:02.037474 | orchestrator | 2025-10-09 10:57:02.037484 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-09 10:57:02.037495 | orchestrator | Thursday 09 October 2025 10:56:53 +0000 (0:00:00.073) 0:00:07.022 ****** 2025-10-09 10:57:02.037507 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:57:02.037517 | orchestrator | 2025-10-09 10:57:02.037528 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-10-09 10:57:02.037540 | orchestrator | Thursday 09 October 2025 10:56:53 +0000 (0:00:00.266) 0:00:07.288 ****** 2025-10-09 10:57:02.037551 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:57:02.037562 | orchestrator | 2025-10-09 10:57:02.037591 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-10-09 10:57:02.037603 | orchestrator | Thursday 09 October 2025 10:56:53 +0000 (0:00:00.248) 0:00:07.537 ****** 2025-10-09 10:57:02.037614 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.037625 | 
orchestrator | 2025-10-09 10:57:02.037636 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-10-09 10:57:02.037647 | orchestrator | Thursday 09 October 2025 10:56:53 +0000 (0:00:00.127) 0:00:07.664 ****** 2025-10-09 10:57:02.037658 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:57:02.037669 | orchestrator | 2025-10-09 10:57:02.037679 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-10-09 10:57:02.037690 | orchestrator | Thursday 09 October 2025 10:56:55 +0000 (0:00:02.042) 0:00:09.707 ****** 2025-10-09 10:57:02.037701 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.037712 | orchestrator | 2025-10-09 10:57:02.037723 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-10-09 10:57:02.037734 | orchestrator | Thursday 09 October 2025 10:56:56 +0000 (0:00:00.459) 0:00:10.167 ****** 2025-10-09 10:57:02.037744 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.037755 | orchestrator | 2025-10-09 10:57:02.037766 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-10-09 10:57:02.037777 | orchestrator | Thursday 09 October 2025 10:56:56 +0000 (0:00:00.335) 0:00:10.502 ****** 2025-10-09 10:57:02.037788 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:57:02.037798 | orchestrator | 2025-10-09 10:57:02.037809 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-10-09 10:57:02.037829 | orchestrator | Thursday 09 October 2025 10:56:56 +0000 (0:00:00.171) 0:00:10.674 ****** 2025-10-09 10:57:02.037841 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:57:02.037852 | orchestrator | 2025-10-09 10:57:02.037862 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-10-09 10:57:02.037873 | orchestrator | Thursday 09 October 2025 10:56:57 +0000 
(0:00:00.173) 0:00:10.847 ****** 2025-10-09 10:57:02.037884 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:02.037895 | orchestrator | 2025-10-09 10:57:02.037906 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-10-09 10:57:02.037917 | orchestrator | Thursday 09 October 2025 10:56:57 +0000 (0:00:00.292) 0:00:11.140 ****** 2025-10-09 10:57:02.037928 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:57:02.037938 | orchestrator | 2025-10-09 10:57:02.037949 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-10-09 10:57:02.037960 | orchestrator | Thursday 09 October 2025 10:56:57 +0000 (0:00:00.283) 0:00:11.424 ****** 2025-10-09 10:57:02.037971 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:02.037982 | orchestrator | 2025-10-09 10:57:02.037993 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-10-09 10:57:02.038072 | orchestrator | Thursday 09 October 2025 10:56:59 +0000 (0:00:01.483) 0:00:12.907 ****** 2025-10-09 10:57:02.038084 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:02.038095 | orchestrator | 2025-10-09 10:57:02.038106 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-10-09 10:57:02.038117 | orchestrator | Thursday 09 October 2025 10:56:59 +0000 (0:00:00.308) 0:00:13.216 ****** 2025-10-09 10:57:02.038128 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:02.038139 | orchestrator | 2025-10-09 10:57:02.038150 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:57:02.038161 | orchestrator | Thursday 09 October 2025 10:56:59 +0000 (0:00:00.262) 0:00:13.478 ****** 2025-10-09 10:57:02.038171 | orchestrator | 2025-10-09 
10:57:02.038182 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:57:02.038193 | orchestrator | Thursday 09 October 2025 10:56:59 +0000 (0:00:00.070) 0:00:13.549 ****** 2025-10-09 10:57:02.038204 | orchestrator | 2025-10-09 10:57:02.038215 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-10-09 10:57:02.038226 | orchestrator | Thursday 09 October 2025 10:56:59 +0000 (0:00:00.068) 0:00:13.617 ****** 2025-10-09 10:57:02.038237 | orchestrator | 2025-10-09 10:57:02.038247 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-10-09 10:57:02.038259 | orchestrator | Thursday 09 October 2025 10:57:00 +0000 (0:00:00.272) 0:00:13.890 ****** 2025-10-09 10:57:02.038270 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:02.038280 | orchestrator | 2025-10-09 10:57:02.038291 | orchestrator | TASK [Print report file information] ******************************************* 2025-10-09 10:57:02.038302 | orchestrator | Thursday 09 October 2025 10:57:01 +0000 (0:00:01.478) 0:00:15.368 ****** 2025-10-09 10:57:02.038313 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-10-09 10:57:02.038324 | orchestrator |  "msg": [ 2025-10-09 10:57:02.038336 | orchestrator |  "Validator run completed.", 2025-10-09 10:57:02.038347 | orchestrator |  "You can find the report file here:", 2025-10-09 10:57:02.038359 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-10-09T10:56:47+00:00-report.json", 2025-10-09 10:57:02.038370 | orchestrator |  "on the following host:", 2025-10-09 10:57:02.038381 | orchestrator |  "testbed-manager" 2025-10-09 10:57:02.038393 | orchestrator |  ] 2025-10-09 10:57:02.038405 | orchestrator | } 2025-10-09 10:57:02.038416 | orchestrator | 2025-10-09 10:57:02.038427 | orchestrator | PLAY RECAP 
********************************************************************* 2025-10-09 10:57:02.038447 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-10-09 10:57:02.038459 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:57:02.038479 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:57:02.401709 | orchestrator | 2025-10-09 10:57:02.401812 | orchestrator | 2025-10-09 10:57:02.401829 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:57:02.401843 | orchestrator | Thursday 09 October 2025 10:57:02 +0000 (0:00:00.442) 0:00:15.811 ****** 2025-10-09 10:57:02.401855 | orchestrator | =============================================================================== 2025-10-09 10:57:02.401866 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.04s 2025-10-09 10:57:02.401877 | orchestrator | Aggregate test results step one ----------------------------------------- 1.48s 2025-10-09 10:57:02.401888 | orchestrator | Write report file ------------------------------------------------------- 1.48s 2025-10-09 10:57:02.401899 | orchestrator | Create report output directory ------------------------------------------ 1.03s 2025-10-09 10:57:02.401909 | orchestrator | Get container info ------------------------------------------------------ 1.02s 2025-10-09 10:57:02.401920 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s 2025-10-09 10:57:02.401931 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.52s 2025-10-09 10:57:02.401942 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s 2025-10-09 10:57:02.401952 | orchestrator | Parse mgr module list from json 
----------------------------------------- 0.46s 2025-10-09 10:57:02.401963 | orchestrator | Print report file information ------------------------------------------- 0.44s 2025-10-09 10:57:02.401974 | orchestrator | Flush handlers ---------------------------------------------------------- 0.41s 2025-10-09 10:57:02.401985 | orchestrator | Set test result to failed if container is missing ----------------------- 0.40s 2025-10-09 10:57:02.401995 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.34s 2025-10-09 10:57:02.402103 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2025-10-09 10:57:02.402115 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s 2025-10-09 10:57:02.402126 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2025-10-09 10:57:02.402137 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2025-10-09 10:57:02.402170 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s 2025-10-09 10:57:02.402182 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2025-10-09 10:57:02.402193 | orchestrator | Flush handlers ---------------------------------------------------------- 0.27s 2025-10-09 10:57:02.739355 | orchestrator | + osism validate ceph-osds 2025-10-09 10:57:25.584203 | orchestrator | 2025-10-09 10:57:25.584315 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-10-09 10:57:25.584332 | orchestrator | 2025-10-09 10:57:25.584345 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-10-09 10:57:25.584357 | orchestrator | Thursday 09 October 2025 10:57:20 +0000 (0:00:00.504) 0:00:00.504 ****** 2025-10-09 10:57:25.584369 | orchestrator | ok: 
[testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:25.584380 | orchestrator | 2025-10-09 10:57:25.584391 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-10-09 10:57:25.584403 | orchestrator | Thursday 09 October 2025 10:57:21 +0000 (0:00:00.858) 0:00:01.363 ****** 2025-10-09 10:57:25.584414 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:25.584425 | orchestrator | 2025-10-09 10:57:25.584435 | orchestrator | TASK [Create report output directory] ****************************************** 2025-10-09 10:57:25.584469 | orchestrator | Thursday 09 October 2025 10:57:21 +0000 (0:00:00.537) 0:00:01.900 ****** 2025-10-09 10:57:25.584480 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-10-09 10:57:25.584491 | orchestrator | 2025-10-09 10:57:25.584502 | orchestrator | TASK [Define report vars] ****************************************************** 2025-10-09 10:57:25.584512 | orchestrator | Thursday 09 October 2025 10:57:22 +0000 (0:00:01.098) 0:00:02.998 ****** 2025-10-09 10:57:25.584523 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:57:25.584535 | orchestrator | 2025-10-09 10:57:25.584546 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-10-09 10:57:25.584557 | orchestrator | Thursday 09 October 2025 10:57:22 +0000 (0:00:00.142) 0:00:03.141 ****** 2025-10-09 10:57:25.584567 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:57:25.584578 | orchestrator | 2025-10-09 10:57:25.584589 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-10-09 10:57:25.584600 | orchestrator | Thursday 09 October 2025 10:57:23 +0000 (0:00:00.144) 0:00:03.285 ****** 2025-10-09 10:57:25.584611 | orchestrator | skipping: [testbed-node-3] 2025-10-09 10:57:25.584622 | orchestrator | skipping: [testbed-node-4] 2025-10-09 10:57:25.584632 | orchestrator | 
skipping: [testbed-node-5] 2025-10-09 10:57:25.584643 | orchestrator | 2025-10-09 10:57:25.584653 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-10-09 10:57:25.584664 | orchestrator | Thursday 09 October 2025 10:57:23 +0000 (0:00:00.348) 0:00:03.634 ****** 2025-10-09 10:57:25.584675 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:57:25.584685 | orchestrator | 2025-10-09 10:57:25.584711 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-10-09 10:57:25.584722 | orchestrator | Thursday 09 October 2025 10:57:23 +0000 (0:00:00.174) 0:00:03.808 ****** 2025-10-09 10:57:25.584735 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:57:25.584747 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:57:25.584759 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:57:25.584771 | orchestrator | 2025-10-09 10:57:25.584783 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-10-09 10:57:25.584795 | orchestrator | Thursday 09 October 2025 10:57:23 +0000 (0:00:00.332) 0:00:04.141 ****** 2025-10-09 10:57:25.584807 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:57:25.584818 | orchestrator | 2025-10-09 10:57:25.584830 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-10-09 10:57:25.584842 | orchestrator | Thursday 09 October 2025 10:57:24 +0000 (0:00:00.622) 0:00:04.763 ****** 2025-10-09 10:57:25.584854 | orchestrator | ok: [testbed-node-3] 2025-10-09 10:57:25.584866 | orchestrator | ok: [testbed-node-4] 2025-10-09 10:57:25.584878 | orchestrator | ok: [testbed-node-5] 2025-10-09 10:57:25.584889 | orchestrator | 2025-10-09 10:57:25.584901 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-10-09 10:57:25.584913 | orchestrator | Thursday 09 October 2025 10:57:25 +0000 (0:00:00.656) 0:00:05.420 ****** 2025-10-09 
10:57:25.584928 | orchestrator | skipping: [testbed-node-3] => (item={'id': '64d6486b6bb4a62e8c6e484e11e57fc992b997d8169423c0d4e3129010b2ac57', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-10-09 10:57:25.584944 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b58bb80a9d48df42f39621ec1b6c091fa0e40f3f799d589a85f6fb15bd6d9bab', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-10-09 10:57:25.584956 | orchestrator | skipping: [testbed-node-3] => (item={'id': '376af88b2c59d61c3b814ec6c0a08dd4c6a43c0aaf803ff3abe7761b6942f689', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-10-09 10:57:25.584970 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ded8300bf4c7609f557a557f759f87605ddfa37c8d9966a28038f4533816afe3', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-10-09 10:57:25.585024 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6ee1a478c0eeaa0ca935a4d2f3da3e693354bc4850edc0759f1cae14d5ba6b2c', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-10-09 10:57:25.585056 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9adb5cd660c48844826430eaddc1353c9391f17e823fff9c14ec0bdb22db6f74', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-10-09 10:57:25.585070 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dbd0299b053d33c79eb56daa1e178d94017318a4e202ad7b2d17759091b0c64e', 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-10-09 10:57:25.585083 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd4777b3094eaef9e7b6fac1d7ec247fe52dbfecda119718745ab2e124ed5a750', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-10-09 10:57:25.585096 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2875a2aec9b5231788f5ddb284104751581a6f1a7be11e03006c73b0dbb906a1', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-10-09 10:57:25.585112 | orchestrator | skipping: [testbed-node-3] => (item={'id': '09d13b5a0a4192c01de735ea7a7a53802046b7c43300eddb77222ca3d6738ef6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})  2025-10-09 10:57:25.585123 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dc1bf2c5edd09396642852642dfec754ad9c9c1dac4ac49c5b20be8495eb8c6c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-10-09 10:57:25.585134 | orchestrator | skipping: [testbed-node-3] => (item={'id': '53deccfe33d257c5bb825ce263f65a670891852e24846880cb3513130f583be4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})  2025-10-09 10:57:25.585150 | orchestrator | ok: [testbed-node-3] => (item={'id': 'e1591852d8ca974dfac692ad51320fcbceb00f19be17cff0eb4d11400f6d9dfb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-10-09 10:57:25.585162 | orchestrator | ok: 
[testbed-node-3] => (item={'id': 'a4d5524b40b948c902f14b77093c70d318baa4c9bdf8d22c9c571d202ddc445f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 26 minutes'})
2025-10-09 10:57:25.585173 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'af7e85e51fcc575b7ebf644b51bd8be4a6512165fc40d2790dd26c042a02bd68', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-10-09 10:57:25.585185 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9796fcff4cd560f208df2722326b0ff5d52a02b74c888b7cf74a2379e73dba75', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-10-09 10:57:25.585196 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5327b8247dd84fb9abae6d684f227a41a839305b7d05d566c36938c98731677b', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-10-09 10:57:25.585207 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2b09f4f8dbbceb4c49bb27764227b04c31d4b3dd160b37270cb0c306e6e0eacd', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-09 10:57:25.585225 | orchestrator | skipping: [testbed-node-3] => (item={'id': '73e528389218fc78be93627133d0f8b992c996ab5b1883a1f0e0f80306b4309d', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-09 10:57:25.585236 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8e63c2b4b16b719a9ffde3e3054094998851080e281cdaf60b606814aff2440b', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-09 10:57:25.585247 | orchestrator | skipping: [testbed-node-4] => (item={'id': '689710581fa5f2aa1c71de2f54464fae103af13b78f7f15e041bc83a30ad6ded', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-10-09 10:57:25.585265 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b3d5ac8293466f59cd74efd25c3a3021b7de695e095ddd8ba817cfa8e0eb57ff', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-10-09 10:57:25.844789 | orchestrator | skipping: [testbed-node-4] => (item={'id': '032417b698f113798fce491e7f613d0901ba67e2921aeec64f69ff5e0bb6bbb2', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-10-09 10:57:25.844919 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd89d48174cf709a2e3a998cf9f716dfc9912afacdcee981b76745baaa4374370', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-10-09 10:57:25.844948 | orchestrator | skipping: [testbed-node-4] => (item={'id': '803342dee94792cb93c2e692f1a3b5d19c9ca460cab6c712fc80d3a74c65c214', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-10-09 10:57:25.844966 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a8aa77e9d06bd02d706e11de97130e1f75bac35b678fa829de251cc0d5bb4f4', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-10-09 10:57:25.844978 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dd232383651abd3368c631c4c5c24a663d526aceb97497179d9a7dd37648f423', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-10-09 10:57:25.844990 | orchestrator | skipping: [testbed-node-4] => (item={'id': '60fc1361dae77094e41a9e1e0f59c90c33a603101ffb85008bf9092bc3e49cb2', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})
2025-10-09 10:57:25.845055 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b79cf9bbd5d8d983e0c85f4c8aa1af5d0fdeb537ceb2fb2385d2c167111d2d21', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2025-10-09 10:57:25.845068 | orchestrator | skipping: [testbed-node-4] => (item={'id': '44a3f292033ca9eeea887afe4dbd8019ee7a93a04c0a01d8331ae52c56f513be', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2025-10-09 10:57:25.845080 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a0d7dd1f11ba25e41e4163201823f9ecc8bd15a6edbb4703a6cb9069eeb1ebc1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-10-09 10:57:25.845091 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'df82795641d8b2a481d2b8552e19db45e3171bf134f9964aa3570b1908bcf12e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})
2025-10-09 10:57:25.845132 | orchestrator | ok: [testbed-node-4] => (item={'id': '52a593c67ff6317176f7bb08739f61cbeab9920fc1e159b1f374ae9b06ede9d4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'})
2025-10-09 10:57:25.845145 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ef4e4052c21967a6445b3dd164b2ba74cf30a72b99563595dd92abb58944494a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 26 minutes'})
2025-10-09 10:57:25.845156 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd3f3bf5561149848ba5a5e1beddb5f0884944f77fc359e6cacd301ec02ff8140', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-10-09 10:57:25.845186 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e7a2a2f84e88eb0e0ae1782b0295d513835cf921d1f628869bf1b0703cd9334b', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-10-09 10:57:25.845198 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6bb4840eafc522c0af212d46cf6e0951bbce540dd5f9c6d11f7f5e02ede24c8b', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-10-09 10:57:25.845228 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb4c216cb13601f75c233c030bb9bf588da0a34f6f2e440e58fde831434bb2fd', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-09 10:57:25.845240 | orchestrator | skipping: [testbed-node-4] => (item={'id': '14057e2c1e015f7523218237d2de0bb6ec5fb69ea7f87e8ae02bd43239e880f6', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-09 10:57:25.845252 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ec202496a7be6e796b26a0c454ed2990a7616d057dd142765c1af2b88ae5cbc7', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-09 10:57:25.845264 | orchestrator | skipping: [testbed-node-5] => (item={'id': '30de3e76ab58368ac754c27dc08c4008ec6cd66a289520c964d720151fb2c4a4', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-10-09 10:57:25.845276 | orchestrator | skipping: [testbed-node-5] => (item={'id': '66ba6a2fbd87374593aed034e983fcd53b56b44f45b193ab34b9db38b7989d79', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-10-09 10:57:25.845288 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7493260f7de51d1f2547b8bed66cc12096cc50ca2e70e682abeb965c8f29985a', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-10-09 10:57:25.845305 | orchestrator | skipping: [testbed-node-5] => (item={'id': '85594f19564e0537cd245e6de5f60ca0624818f663e06a4a6a18f0501581670e', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-10-09 10:57:25.845318 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8fcddd3417c29d3bb7e75ce1dda2301eaefa0bdf793a67eee4709c77354bbd8f', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-10-09 10:57:25.845331 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9c648875bc9547c2090ec15c6476c0d2f684f8a96aec8df871ff12825a09f556', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-10-09 10:57:25.845352 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a04eb3c6a69d5c0abdea9a05eb47a44a569c06f75c211e2f9a9e57d25d64e16', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-10-09 10:57:25.845364 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6da008885715c6b12218875f5eabc3e3386b35903ef047c65eaf4e41a094e371', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})
2025-10-09 10:57:25.845377 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7ac4b8b3cd9f27a1ecc44e178ce5512aaa67d1810b4d4eebc3a89a2667f1a1d5', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2025-10-09 10:57:25.845390 | orchestrator | skipping: [testbed-node-5] => (item={'id': '60074f5cf324e209b485a2aea87c8a65d2d0410cefef95873e215e35ae0b9d0f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-10-09 10:57:25.845403 | orchestrator | skipping: [testbed-node-5] => (item={'id': '84dac747cc3753009780c6388021459c176b2637b11e885f6a6efe8bb4fbd439', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-10-09 10:57:25.845416 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e58672e86b17003d0b665547e2ea847a70278317eef8c1a80f7073f5023011e0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})
2025-10-09 10:57:25.845435 | orchestrator | ok: [testbed-node-5] => (item={'id': '9aa6071b8a227d265de7ddf0b5d87acbb9247e66faee6613a3b4fe781a6fada7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'})
2025-10-09 10:57:34.660814 | orchestrator | ok: [testbed-node-5] => (item={'id': '8258f799a923a9efdbe9c7bb32f71e0be8413971c967c3d586e317b37ff082b3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 26 minutes'})
2025-10-09 10:57:34.660929 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd56d782b945ae4ccee6f1adaf3362ecb86618338811f53fe480b33a2dd58b167', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-10-09 10:57:34.660947 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2e68057290bb2fdc3b5269ea8fa101f4d4dd8d06c14535c6b0ed941499506525', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-10-09 10:57:34.660962 | orchestrator | skipping: [testbed-node-5] => (item={'id': '83817d75a81cfa6916783f612535d53b4da18cf1266e3b4139f9eda0460cdced', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-10-09 10:57:34.660974 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0eaf4077f5e93819d0c264bfd96eb699571520d3f3704177721615d7cf2d4635', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-09 10:57:34.661053 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fb1a058088cdf7c35f46ce86f4a2714ff9110df6526c0d558bff401c031c9296', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-09 10:57:34.661068 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e71a7ecf62d8ef212b227fdb9eda27bfe34685b9fc45c942c3984882eafa5e9e', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-10-09
10:57:34.661101 | orchestrator |
2025-10-09 10:57:34.661114 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-10-09 10:57:34.661127 | orchestrator | Thursday 09 October 2025 10:57:25 +0000 (0:00:00.623) 0:00:06.044 ******
2025-10-09 10:57:34.661138 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.661150 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:34.661161 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:34.661172 | orchestrator |
2025-10-09 10:57:34.661183 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-10-09 10:57:34.661194 | orchestrator | Thursday 09 October 2025 10:57:26 +0000 (0:00:00.316) 0:00:06.361 ******
2025-10-09 10:57:34.661205 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.661217 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:57:34.661228 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:57:34.661239 | orchestrator |
2025-10-09 10:57:34.661250 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-10-09 10:57:34.661261 | orchestrator | Thursday 09 October 2025 10:57:26 +0000 (0:00:00.319) 0:00:06.680 ******
2025-10-09 10:57:34.661272 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.661283 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:34.661294 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:34.661305 | orchestrator |
2025-10-09 10:57:34.661316 | orchestrator | TASK [Prepare test data] *******************************************************
2025-10-09 10:57:34.661327 | orchestrator | Thursday 09 October 2025 10:57:27 +0000 (0:00:00.623) 0:00:07.304 ******
2025-10-09 10:57:34.661337 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.661349 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:34.661361 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:34.661373 | orchestrator |
2025-10-09 10:57:34.661385 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-10-09 10:57:34.661397 | orchestrator | Thursday 09 October 2025 10:57:27 +0000 (0:00:00.347) 0:00:07.651 ******
2025-10-09 10:57:34.661410 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-10-09 10:57:34.661423 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-10-09 10:57:34.661435 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.661447 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-10-09 10:57:34.661460 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-10-09 10:57:34.661472 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:57:34.661484 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-10-09 10:57:34.661496 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-10-09 10:57:34.661508 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:57:34.661521 | orchestrator |
2025-10-09 10:57:34.661533 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-10-09 10:57:34.661545 | orchestrator | Thursday 09 October 2025 10:57:27 +0000 (0:00:00.351) 0:00:08.003 ******
2025-10-09 10:57:34.661557 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.661569 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:34.661581 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:34.661594 | orchestrator |
2025-10-09 10:57:34.661623 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-10-09 10:57:34.661636 | orchestrator | Thursday 09 October 2025 10:57:28 +0000 (0:00:00.330) 0:00:08.334 ******
2025-10-09 10:57:34.661648 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.661661 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:57:34.661673 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:57:34.661685 | orchestrator |
2025-10-09 10:57:34.661696 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-10-09 10:57:34.661714 | orchestrator | Thursday 09 October 2025 10:57:28 +0000 (0:00:00.523) 0:00:08.857 ******
2025-10-09 10:57:34.661725 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.661736 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:57:34.661747 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:57:34.661758 | orchestrator |
2025-10-09 10:57:34.661769 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-10-09 10:57:34.661779 | orchestrator | Thursday 09 October 2025 10:57:28 +0000 (0:00:00.331) 0:00:09.189 ******
2025-10-09 10:57:34.661790 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.661801 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:34.661812 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:34.661823 | orchestrator |
2025-10-09 10:57:34.661833 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-10-09 10:57:34.661844 | orchestrator | Thursday 09 October 2025 10:57:29 +0000 (0:00:00.344) 0:00:09.534 ******
2025-10-09 10:57:34.661855 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.661866 | orchestrator |
2025-10-09 10:57:34.661877 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-10-09 10:57:34.661888 | orchestrator | Thursday 09 October 2025 10:57:29 +0000 (0:00:00.243) 0:00:09.777 ******
2025-10-09 10:57:34.661899 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.661910 | orchestrator |
2025-10-09 10:57:34.661921 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-10-09 10:57:34.661931 | orchestrator | Thursday 09 October 2025 10:57:29 +0000 (0:00:00.240) 0:00:10.017 ******
2025-10-09 10:57:34.661942 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.661953 | orchestrator |
2025-10-09 10:57:34.661964 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-09 10:57:34.661975 | orchestrator | Thursday 09 October 2025 10:57:30 +0000 (0:00:00.268) 0:00:10.286 ******
2025-10-09 10:57:34.661985 | orchestrator |
2025-10-09 10:57:34.662070 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-09 10:57:34.662083 | orchestrator | Thursday 09 October 2025 10:57:30 +0000 (0:00:00.071) 0:00:10.358 ******
2025-10-09 10:57:34.662094 | orchestrator |
2025-10-09 10:57:34.662105 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-09 10:57:34.662116 | orchestrator | Thursday 09 October 2025 10:57:30 +0000 (0:00:00.383) 0:00:10.742 ******
2025-10-09 10:57:34.662126 | orchestrator |
2025-10-09 10:57:34.662137 | orchestrator | TASK [Print report file information] *******************************************
2025-10-09 10:57:34.662158 | orchestrator | Thursday 09 October 2025 10:57:30 +0000 (0:00:00.075) 0:00:10.818 ******
2025-10-09 10:57:34.662169 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.662180 | orchestrator |
2025-10-09 10:57:34.662190 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-10-09 10:57:34.662201 | orchestrator | Thursday 09 October 2025 10:57:30 +0000 (0:00:00.275) 0:00:11.093 ******
2025-10-09 10:57:34.662212 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.662223 | orchestrator |
2025-10-09 10:57:34.662233 | orchestrator | TASK [Prepare test data] *******************************************************
2025-10-09 10:57:34.662244 | orchestrator | Thursday 09 October 2025 10:57:31 +0000 (0:00:00.331) 0:00:11.425 ******
2025-10-09 10:57:34.662254 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.662265 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:34.662276 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:34.662286 | orchestrator |
2025-10-09 10:57:34.662297 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-10-09 10:57:34.662308 | orchestrator | Thursday 09 October 2025 10:57:31 +0000 (0:00:00.292) 0:00:11.717 ******
2025-10-09 10:57:34.662319 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.662329 | orchestrator |
2025-10-09 10:57:34.662340 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-10-09 10:57:34.662351 | orchestrator | Thursday 09 October 2025 10:57:31 +0000 (0:00:00.241) 0:00:11.959 ******
2025-10-09 10:57:34.662369 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-10-09 10:57:34.662380 | orchestrator |
2025-10-09 10:57:34.662390 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-10-09 10:57:34.662401 | orchestrator | Thursday 09 October 2025 10:57:33 +0000 (0:00:01.646) 0:00:13.606 ******
2025-10-09 10:57:34.662412 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.662422 | orchestrator |
2025-10-09 10:57:34.662433 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-10-09 10:57:34.662443 | orchestrator | Thursday 09 October 2025 10:57:33 +0000 (0:00:00.152) 0:00:13.758 ******
2025-10-09 10:57:34.662454 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.662464 | orchestrator |
2025-10-09 10:57:34.662475 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-10-09 10:57:34.662486 | orchestrator | Thursday 09 October 2025 10:57:33 +0000 (0:00:00.299) 0:00:14.057 ******
2025-10-09 10:57:34.662496 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:34.662507 | orchestrator |
2025-10-09 10:57:34.662517 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-10-09 10:57:34.662528 | orchestrator | Thursday 09 October 2025 10:57:33 +0000 (0:00:00.132) 0:00:14.190 ******
2025-10-09 10:57:34.662539 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.662549 | orchestrator |
2025-10-09 10:57:34.662560 | orchestrator | TASK [Prepare test data] *******************************************************
2025-10-09 10:57:34.662571 | orchestrator | Thursday 09 October 2025 10:57:34 +0000 (0:00:00.357) 0:00:14.547 ******
2025-10-09 10:57:34.662581 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:34.662592 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:34.662603 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:34.662613 | orchestrator |
2025-10-09 10:57:34.662624 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-10-09 10:57:34.662643 | orchestrator | Thursday 09 October 2025 10:57:34 +0000 (0:00:00.317) 0:00:14.865 ******
2025-10-09 10:57:47.564864 | orchestrator | changed: [testbed-node-3]
2025-10-09 10:57:47.564962 | orchestrator | changed: [testbed-node-4]
2025-10-09 10:57:47.564972 | orchestrator | changed: [testbed-node-5]
2025-10-09 10:57:47.564981 | orchestrator |
2025-10-09 10:57:47.565033 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-10-09 10:57:47.565044 | orchestrator | Thursday 09 October 2025 10:57:37 +0000 (0:00:02.401) 0:00:17.267 ******
2025-10-09 10:57:47.565052 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:47.565061 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:47.565068 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:47.565076 | orchestrator |
2025-10-09 10:57:47.565083 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-10-09 10:57:47.565091 | orchestrator | Thursday 09 October 2025 10:57:37 +0000 (0:00:00.371) 0:00:17.638 ******
2025-10-09 10:57:47.565098 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:47.565106 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:47.565113 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:47.565121 | orchestrator |
2025-10-09 10:57:47.565128 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-10-09 10:57:47.565136 | orchestrator | Thursday 09 October 2025 10:57:38 +0000 (0:00:00.732) 0:00:18.371 ******
2025-10-09 10:57:47.565143 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:47.565151 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:57:47.565158 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:57:47.565166 | orchestrator |
2025-10-09 10:57:47.565174 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-10-09 10:57:47.565219 | orchestrator | Thursday 09 October 2025 10:57:38 +0000 (0:00:00.312) 0:00:18.684 ******
2025-10-09 10:57:47.565227 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:47.565234 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:47.565242 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:47.565249 | orchestrator |
2025-10-09 10:57:47.565256 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-10-09 10:57:47.565280 | orchestrator | Thursday 09 October 2025 10:57:38 +0000 (0:00:00.369) 0:00:19.053 ******
2025-10-09 10:57:47.565287 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:47.565295 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:57:47.565302 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:57:47.565309 | orchestrator |
2025-10-09 10:57:47.565319 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-10-09 10:57:47.565327 | orchestrator | Thursday 09 October 2025 10:57:39 +0000 (0:00:00.330) 0:00:19.383 ******
2025-10-09 10:57:47.565334 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:47.565341 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:57:47.565348 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:57:47.565355 | orchestrator |
2025-10-09 10:57:47.565363 | orchestrator | TASK [Prepare test data] *******************************************************
2025-10-09 10:57:47.565370 | orchestrator | Thursday 09 October 2025 10:57:39 +0000 (0:00:00.499) 0:00:19.882 ******
2025-10-09 10:57:47.565377 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:47.565384 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:47.565392 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:47.565399 | orchestrator |
2025-10-09 10:57:47.565406 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-10-09 10:57:47.565413 | orchestrator | Thursday 09 October 2025 10:57:40 +0000 (0:00:00.497) 0:00:20.380 ******
2025-10-09 10:57:47.565421 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:47.565430 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:47.565437 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:47.565445 | orchestrator |
2025-10-09 10:57:47.565453 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-10-09 10:57:47.565462 | orchestrator | Thursday 09 October 2025 10:57:40 +0000 (0:00:00.565) 0:00:20.945 ******
2025-10-09 10:57:47.565470 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:47.565478 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:47.565487 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:47.565495 | orchestrator |
2025-10-09 10:57:47.565503 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-10-09 10:57:47.565511 | orchestrator | Thursday 09 October 2025 10:57:41 +0000 (0:00:00.306) 0:00:21.252 ******
2025-10-09 10:57:47.565519 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:47.565527 | orchestrator | skipping: [testbed-node-4]
2025-10-09 10:57:47.565535 | orchestrator | skipping: [testbed-node-5]
2025-10-09 10:57:47.565543 | orchestrator |
2025-10-09 10:57:47.565551 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-10-09 10:57:47.565560 | orchestrator | Thursday 09 October 2025 10:57:41 +0000 (0:00:00.498) 0:00:21.750 ******
2025-10-09 10:57:47.565568 | orchestrator | ok: [testbed-node-3]
2025-10-09 10:57:47.565575 | orchestrator | ok: [testbed-node-4]
2025-10-09 10:57:47.565583 | orchestrator | ok: [testbed-node-5]
2025-10-09 10:57:47.565591 | orchestrator |
2025-10-09 10:57:47.565600 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-10-09 10:57:47.565608 | orchestrator | Thursday 09 October 2025 10:57:41 +0000 (0:00:00.316) 0:00:22.067 ******
2025-10-09 10:57:47.565616 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-09 10:57:47.565625 | orchestrator |
2025-10-09 10:57:47.565633 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-10-09 10:57:47.565640 | orchestrator | Thursday 09 October 2025 10:57:42 +0000 (0:00:00.285) 0:00:22.353 ******
2025-10-09 10:57:47.565649 | orchestrator | skipping: [testbed-node-3]
2025-10-09 10:57:47.565657 | orchestrator |
2025-10-09 10:57:47.565665 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-10-09 10:57:47.565673 | orchestrator | Thursday 09 October 2025 10:57:42 +0000 (0:00:00.271) 0:00:22.625 ******
2025-10-09 10:57:47.565682 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-09 10:57:47.565690 | orchestrator |
2025-10-09 10:57:47.565698 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-10-09 10:57:47.565711 | orchestrator | Thursday 09 October 2025 10:57:44 +0000 (0:00:01.726) 0:00:24.351 ******
2025-10-09 10:57:47.565719 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-09 10:57:47.565728 | orchestrator |
2025-10-09 10:57:47.565736 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-10-09 10:57:47.565744 | orchestrator | Thursday 09 October 2025 10:57:44 +0000 (0:00:00.285) 0:00:24.637 ******
2025-10-09 10:57:47.565765 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-09 10:57:47.565773 | orchestrator |
2025-10-09 10:57:47.565781 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-09 10:57:47.565788 | orchestrator | Thursday 09 October 2025 10:57:44 +0000 (0:00:00.289) 0:00:24.927 ******
2025-10-09 10:57:47.565795 | orchestrator |
2025-10-09 10:57:47.565802 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-09 10:57:47.565809 | orchestrator | Thursday 09 October 2025 10:57:44 +0000 (0:00:00.067) 0:00:24.994 ******
2025-10-09 10:57:47.565817 | orchestrator |
2025-10-09 10:57:47.565824 | orchestrator | TASK [Flush handlers] **********************************************************
2025-10-09 10:57:47.565831 | orchestrator | Thursday 09 October 2025 10:57:44 +0000 (0:00:00.072) 0:00:25.067 ******
2025-10-09 10:57:47.565838 | orchestrator |
2025-10-09 10:57:47.565845 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-10-09 10:57:47.565852 | orchestrator | Thursday 09 October 2025 10:57:44 +0000 (0:00:00.076) 0:00:25.143 ******
2025-10-09 10:57:47.565860 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-10-09 10:57:47.565867 | orchestrator |
2025-10-09 10:57:47.565874 | orchestrator | TASK [Print report file information] *******************************************
2025-10-09 10:57:47.565881 | orchestrator | Thursday 09 October 2025 10:57:46 +0000 (0:00:01.868) 0:00:27.011 ******
2025-10-09 10:57:47.565888 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-10-09 10:57:47.565896 | orchestrator |     "msg": [
2025-10-09 10:57:47.565903 | orchestrator |         "Validator run completed.",
2025-10-09 10:57:47.565911 | orchestrator |         "You can find the report file here:",
2025-10-09 10:57:47.565918 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2025-10-09T10:57:21+00:00-report.json",
2025-10-09 10:57:47.565925 | orchestrator |         "on the following host:",
2025-10-09 10:57:47.565933 | orchestrator |         "testbed-manager"
2025-10-09 10:57:47.565940 | orchestrator |     ]
2025-10-09 10:57:47.565948 | orchestrator | }
2025-10-09 10:57:47.565955 | orchestrator |
2025-10-09 10:57:47.565962 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 10:57:47.565974 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-10-09 10:57:47.565982 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-10-09 10:57:47.566003 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-10-09 10:57:47.566011 | orchestrator |
2025-10-09 10:57:47.566066 | orchestrator |
2025-10-09 10:57:47.566073 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 10:57:47.566081 | orchestrator | Thursday 09 October 2025 10:57:47 +0000 (0:00:00.396) 0:00:27.407 ******
2025-10-09 10:57:47.566088 | orchestrator |
===============================================================================
2025-10-09 10:57:47.566095 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.40s
2025-10-09 10:57:47.566103 | orchestrator | Write report file ------------------------------------------------------- 1.87s
2025-10-09 10:57:47.566110 | orchestrator | Aggregate test results step one ----------------------------------------- 1.73s
2025-10-09 10:57:47.566117 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.65s
2025-10-09 10:57:47.566132 | orchestrator | Create report output directory ------------------------------------------ 1.10s
2025-10-09 10:57:47.566139 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2025-10-09 10:57:47.566146 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.73s
2025-10-09 10:57:47.566153 | orchestrator | Prepare test data ------------------------------------------------------- 0.66s
2025-10-09 10:57:47.566160 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.62s
2025-10-09 10:57:47.566167 | orchestrator | Set test result to passed if count matches ------------------------------ 0.62s
2025-10-09 10:57:47.566175 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.62s
2025-10-09 10:57:47.566182 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.57s
2025-10-09 10:57:47.566189 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.54s
2025-10-09 10:57:47.566196 | orchestrator | Flush handlers ---------------------------------------------------------- 0.53s
2025-10-09 10:57:47.566203 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.52s
2025-10-09 10:57:47.566210 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.50s
2025-10-09 10:57:47.566218 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.50s
2025-10-09 10:57:47.566225 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s
2025-10-09 10:57:47.566232 | orchestrator | Print report file information ------------------------------------------- 0.40s
2025-10-09 10:57:47.566239 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.37s
2025-10-09 10:57:47.905769 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-10-09 10:57:47.917761 | orchestrator | + set -e
2025-10-09 10:57:47.919447 | orchestrator | + source /opt/manager-vars.sh
2025-10-09 10:57:47.919511 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-10-09 10:57:47.919532 | orchestrator | ++ NUMBER_OF_NODES=6
2025-10-09 10:57:47.919550 | orchestrator | ++ export CEPH_VERSION=reef
2025-10-09 10:57:47.919568 | orchestrator | ++ CEPH_VERSION=reef
2025-10-09 10:57:47.919586 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-10-09 10:57:47.919606 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-10-09 10:57:47.919625 | orchestrator | ++ export MANAGER_VERSION=latest
2025-10-09 10:57:47.919643 | orchestrator | ++ MANAGER_VERSION=latest
2025-10-09 10:57:47.919662 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-10-09 10:57:47.919680 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-10-09 10:57:47.919699 | orchestrator | ++ export ARA=false
2025-10-09 10:57:47.919717 | orchestrator | ++ ARA=false
2025-10-09 10:57:47.919736 | orchestrator | ++ export DEPLOY_MODE=manager
2025-10-09 10:57:47.919754 | orchestrator | ++ DEPLOY_MODE=manager
2025-10-09 10:57:47.919773 | orchestrator | ++ export TEMPEST=false
2025-10-09 10:57:47.919791 | orchestrator | ++ TEMPEST=false
2025-10-09 10:57:47.919815 | orchestrator | ++ export IS_ZUUL=true
2025-10-09 10:57:47.919844 | orchestrator | ++ IS_ZUUL=true
2025-10-09 10:57:47.919869 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25
2025-10-09 10:57:47.919888 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.25
2025-10-09 10:57:47.919907 | orchestrator | ++ export EXTERNAL_API=false
2025-10-09 10:57:47.919925 | orchestrator | ++ EXTERNAL_API=false
2025-10-09 10:57:47.919943 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-10-09 10:57:47.919962 | orchestrator | ++ IMAGE_USER=ubuntu
2025-10-09 10:57:47.919981 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-10-09 10:57:47.920049 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-10-09 10:57:47.920068 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-10-09 10:57:47.920088 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-10-09 10:57:47.920107 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-10-09 10:57:47.920124 | orchestrator | + source /etc/os-release
2025-10-09 10:57:47.920143 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2025-10-09 10:57:47.920162 | orchestrator | ++ NAME=Ubuntu
2025-10-09 10:57:47.920179 | orchestrator | ++ VERSION_ID=24.04
2025-10-09 10:57:47.920199 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2025-10-09 10:57:47.920217 | orchestrator | ++ VERSION_CODENAME=noble
2025-10-09 10:57:47.920236 | orchestrator | ++ ID=ubuntu
2025-10-09 10:57:47.920254 | orchestrator | ++ ID_LIKE=debian
2025-10-09 10:57:47.920272 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-10-09 10:57:47.920319 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-10-09 10:57:47.920339 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-10-09 10:57:47.920359 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-10-09 10:57:47.920378 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-10-09 10:57:47.920397 | orchestrator | ++ LOGO=ubuntu-logo
2025-10-09 10:57:47.920416 | orchestrator | + [[
ubuntu == \u\b\u\n\t\u ]] 2025-10-09 10:57:47.920436 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-10-09 10:57:47.920455 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-10-09 10:57:47.943548 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-10-09 10:58:13.429553 | orchestrator | 2025-10-09 10:58:13.429666 | orchestrator | # Status of Elasticsearch 2025-10-09 10:58:13.429683 | orchestrator | 2025-10-09 10:58:13.429696 | orchestrator | + pushd /opt/configuration/contrib 2025-10-09 10:58:13.429709 | orchestrator | + echo 2025-10-09 10:58:13.429721 | orchestrator | + echo '# Status of Elasticsearch' 2025-10-09 10:58:13.429750 | orchestrator | + echo 2025-10-09 10:58:13.429762 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-10-09 10:58:13.630478 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-10-09 10:58:13.630581 | orchestrator | 2025-10-09 10:58:13.630597 | orchestrator | # Status of MariaDB 2025-10-09 10:58:13.630611 | orchestrator | 2025-10-09 10:58:13.630622 | orchestrator | + echo 2025-10-09 10:58:13.630634 | orchestrator | + echo '# Status of MariaDB' 2025-10-09 10:58:13.630645 | orchestrator | + echo 2025-10-09 10:58:13.630656 | orchestrator | + MARIADB_USER=root_shard_0 2025-10-09 10:58:13.630669 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-10-09 10:58:13.710947 | orchestrator | Reading package lists... 2025-10-09 10:58:14.083972 | orchestrator | Building dependency tree... 2025-10-09 10:58:14.084552 | orchestrator | Reading state information... 2025-10-09 10:58:14.514367 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-10-09 10:58:14.514463 | orchestrator | bc set to manually installed. 2025-10-09 10:58:14.514478 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-10-09 10:58:15.279871 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-10-09 10:58:15.280669 | orchestrator | 2025-10-09 10:58:15.280707 | orchestrator | # Status of Prometheus 2025-10-09 10:58:15.280723 | orchestrator | 2025-10-09 10:58:15.280736 | orchestrator | + echo 2025-10-09 10:58:15.280750 | orchestrator | + echo '# Status of Prometheus' 2025-10-09 10:58:15.280764 | orchestrator | + echo 2025-10-09 10:58:15.280776 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-10-09 10:58:15.334169 | orchestrator | Unauthorized 2025-10-09 10:58:15.339360 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-10-09 10:58:15.419310 | orchestrator | Unauthorized 2025-10-09 10:58:15.425710 | orchestrator | 2025-10-09 10:58:15.425749 | orchestrator | # Status of RabbitMQ 2025-10-09 10:58:15.425763 | orchestrator | 2025-10-09 10:58:15.425775 | orchestrator | + echo 2025-10-09 10:58:15.425786 | orchestrator | + echo '# Status of RabbitMQ' 2025-10-09 10:58:15.425798 | orchestrator | + echo 2025-10-09 10:58:15.425810 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-10-09 10:58:15.926428 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-10-09 10:58:15.935757 | orchestrator | 2025-10-09 10:58:15.935791 | orchestrator | # Status of Redis 2025-10-09 10:58:15.935803 | orchestrator | 2025-10-09 10:58:15.935814 | orchestrator | + echo 2025-10-09 10:58:15.935824 | orchestrator | + echo '# Status of Redis' 2025-10-09 10:58:15.935835 | orchestrator | + echo 2025-10-09 10:58:15.935846 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-10-09 10:58:15.941251 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001907s;;;0.000000;10.000000 2025-10-09 10:58:15.941338 | orchestrator | + popd 2025-10-09 10:58:15.941350 | orchestrator | 2025-10-09 10:58:15.941358 | orchestrator | # Create backup of MariaDB database 2025-10-09 10:58:15.941366 | orchestrator | 2025-10-09 10:58:15.941372 | orchestrator | + echo 2025-10-09 10:58:15.941379 | orchestrator | + echo '# Create backup of MariaDB database' 2025-10-09 10:58:15.941385 | orchestrator | + echo 2025-10-09 10:58:15.941392 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-10-09 10:58:18.105103 | orchestrator | 2025-10-09 10:58:18 | INFO  | Task d0cb4e9d-11e2-4564-ab6b-1a5e6f33874b (mariadb_backup) was prepared for execution. 2025-10-09 10:58:18.105225 | orchestrator | 2025-10-09 10:58:18 | INFO  | It takes a moment until task d0cb4e9d-11e2-4564-ab6b-1a5e6f33874b (mariadb_backup) has been started and output is visible here. 2025-10-09 10:59:48.783355 | orchestrator | 2025-10-09 10:59:48.783470 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-10-09 10:59:48.783488 | orchestrator | 2025-10-09 10:59:48.783500 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-10-09 10:59:48.783512 | orchestrator | Thursday 09 October 2025 10:58:22 +0000 (0:00:00.173) 0:00:00.173 ****** 2025-10-09 10:59:48.783523 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:59:48.783536 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:59:48.783547 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:59:48.783559 | orchestrator | 2025-10-09 10:59:48.783570 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-10-09 10:59:48.783581 | orchestrator | Thursday 09 October 2025 10:58:22 +0000 (0:00:00.333) 0:00:00.507 ****** 2025-10-09 10:59:48.783592 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-10-09 10:59:48.783603 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-10-09 10:59:48.783614 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-10-09 10:59:48.783625 | orchestrator | 2025-10-09 10:59:48.783636 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-10-09 10:59:48.783647 | orchestrator | 2025-10-09 10:59:48.783658 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-10-09 10:59:48.783668 | orchestrator | Thursday 09 October 2025 10:58:23 +0000 (0:00:00.591) 0:00:01.099 ****** 2025-10-09 10:59:48.783681 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-10-09 10:59:48.783692 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-10-09 10:59:48.783703 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-10-09 10:59:48.783714 | orchestrator | 2025-10-09 10:59:48.783725 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-10-09 10:59:48.783736 | orchestrator | Thursday 09 October 2025 10:58:23 +0000 (0:00:00.405) 0:00:01.504 ****** 2025-10-09 10:59:48.783748 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-10-09 10:59:48.783759 | orchestrator | 2025-10-09 10:59:48.783770 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-10-09 10:59:48.783781 | orchestrator | Thursday 09 October 2025 10:58:24 +0000 (0:00:00.584) 0:00:02.088 ****** 2025-10-09 10:59:48.783792 | orchestrator | ok: [testbed-node-0] 2025-10-09 10:59:48.783803 | orchestrator | ok: [testbed-node-1] 2025-10-09 10:59:48.783814 | orchestrator | ok: [testbed-node-2] 2025-10-09 10:59:48.783825 | orchestrator | 2025-10-09 10:59:48.783836 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-10-09 10:59:48.783847 | orchestrator | Thursday 09 October 2025 10:58:27 +0000 (0:00:03.191) 0:00:05.280 ****** 2025-10-09 10:59:48.783858 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-10-09 10:59:48.783912 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-10-09 10:59:48.783925 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-10-09 10:59:48.783939 | orchestrator | mariadb_bootstrap_restart 2025-10-09 10:59:48.783974 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:59:48.783986 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:59:48.783997 | orchestrator | changed: [testbed-node-0] 2025-10-09 10:59:48.784009 | orchestrator | 2025-10-09 10:59:48.784019 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-10-09 10:59:48.784031 | orchestrator | skipping: no hosts matched 2025-10-09 10:59:48.784041 | orchestrator | 2025-10-09 10:59:48.784053 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-10-09 10:59:48.784063 | orchestrator | skipping: no hosts matched 2025-10-09 10:59:48.784075 | orchestrator | 2025-10-09 10:59:48.784085 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-10-09 10:59:48.784097 | orchestrator | skipping: no hosts matched 2025-10-09 10:59:48.784108 | orchestrator | 2025-10-09 10:59:48.784119 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-10-09 10:59:48.784130 | orchestrator | 2025-10-09 10:59:48.784141 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-10-09 10:59:48.784152 | orchestrator | Thursday 09 October 2025 10:59:47 +0000 (0:01:20.172) 0:01:25.453 ****** 2025-10-09 10:59:48.784187 | orchestrator | 
skipping: [testbed-node-0] 2025-10-09 10:59:48.784198 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:59:48.784209 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:59:48.784220 | orchestrator | 2025-10-09 10:59:48.784231 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-10-09 10:59:48.784242 | orchestrator | Thursday 09 October 2025 10:59:47 +0000 (0:00:00.312) 0:01:25.765 ****** 2025-10-09 10:59:48.784253 | orchestrator | skipping: [testbed-node-0] 2025-10-09 10:59:48.784265 | orchestrator | skipping: [testbed-node-1] 2025-10-09 10:59:48.784276 | orchestrator | skipping: [testbed-node-2] 2025-10-09 10:59:48.784287 | orchestrator | 2025-10-09 10:59:48.784298 | orchestrator | PLAY RECAP ********************************************************************* 2025-10-09 10:59:48.784310 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-10-09 10:59:48.784322 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 10:59:48.784334 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-10-09 10:59:48.784345 | orchestrator | 2025-10-09 10:59:48.784356 | orchestrator | 2025-10-09 10:59:48.784368 | orchestrator | TASKS RECAP ******************************************************************** 2025-10-09 10:59:48.784379 | orchestrator | Thursday 09 October 2025 10:59:48 +0000 (0:00:00.458) 0:01:26.224 ****** 2025-10-09 10:59:48.784390 | orchestrator | =============================================================================== 2025-10-09 10:59:48.784401 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 80.17s 2025-10-09 10:59:48.784430 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.19s 2025-10-09 10:59:48.784441 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 0.59s 2025-10-09 10:59:48.784452 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s 2025-10-09 10:59:48.784463 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.46s 2025-10-09 10:59:48.784474 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2025-10-09 10:59:48.784485 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-10-09 10:59:48.784512 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2025-10-09 10:59:49.199394 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-10-09 10:59:49.209080 | orchestrator | + set -e 2025-10-09 10:59:49.209144 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-10-09 10:59:49.209224 | orchestrator | ++ export INTERACTIVE=false 2025-10-09 10:59:49.209263 | orchestrator | ++ INTERACTIVE=false 2025-10-09 10:59:49.209275 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-10-09 10:59:49.209286 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-10-09 10:59:49.209297 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-10-09 10:59:49.210118 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-10-09 10:59:49.213379 | orchestrator | 2025-10-09 10:59:49.213402 | orchestrator | # OpenStack endpoints 2025-10-09 10:59:49.213414 | orchestrator | 2025-10-09 10:59:49.213425 | orchestrator | ++ export MANAGER_VERSION=latest 2025-10-09 10:59:49.213437 | orchestrator | ++ MANAGER_VERSION=latest 2025-10-09 10:59:49.213447 | orchestrator | + export OS_CLOUD=admin 2025-10-09 10:59:49.213458 | orchestrator | + OS_CLOUD=admin 2025-10-09 10:59:49.213470 | orchestrator | + echo 2025-10-09 10:59:49.213481 | orchestrator | + echo '# OpenStack 
endpoints' 2025-10-09 10:59:49.213492 | orchestrator | + echo 2025-10-09 10:59:49.213503 | orchestrator | + openstack endpoint list 2025-10-09 10:59:52.752752 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-10-09 10:59:52.752852 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-10-09 10:59:52.752867 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-10-09 10:59:52.752879 | orchestrator | | 09b570f3e9394c3583bcfb03e129d418 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-10-09 10:59:52.752907 | orchestrator | | 0dc449cbd5e04770bb6f510e3d7236a0 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-10-09 10:59:52.752918 | orchestrator | | 19335d6d13a5414b9bcd5145f077f02f | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-10-09 10:59:52.752929 | orchestrator | | 20dfa85e35844e94977d18b9d7e2f042 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-10-09 10:59:52.752940 | orchestrator | | 3a8ce1ca48db4418a540dd3eec2d82dc | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-10-09 10:59:52.752951 | orchestrator | | 3e12a6906f57447d9bb276f2efd91e0a | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-10-09 10:59:52.752962 | orchestrator | | 4c5f4cd0fba84d5e8da2cda66373c633 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-10-09 10:59:52.752972 | orchestrator | | 4d56866e044c45f89ab535c9bd7e3ed6 
| RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-10-09 10:59:52.752983 | orchestrator | | 5588bc097ec947ec9ae8659aee53d927 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-10-09 10:59:52.752994 | orchestrator | | 72f3d7f57bf14125a7f75e82e40d69dd | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-10-09 10:59:52.753005 | orchestrator | | 7317436bd60d4b208ff87de59c172149 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-10-09 10:59:52.753015 | orchestrator | | 8c80451aeb6c4733a1f25940b7f47a87 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-10-09 10:59:52.753026 | orchestrator | | af43e514631d4a77a2cd12301c58c286 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-10-09 10:59:52.753058 | orchestrator | | b413de87218a46b4b0ca693827d5fa25 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-10-09 10:59:52.753070 | orchestrator | | b87cebc010ec4b7d9c4ba6b4a2d808ff | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-10-09 10:59:52.753080 | orchestrator | | b9a633a28d9f4d2ea00796d120768760 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-10-09 10:59:52.753091 | orchestrator | | dc8316c528134f1fbd3302a3e720b384 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-10-09 10:59:52.753102 | orchestrator | | e3711445f8b4462d93e522632d01cd40 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-10-09 10:59:52.753112 | orchestrator | | e3cc73d622c049bfad4f02d493d26ca3 | RegionOne | swift | object-store | True | public | 
https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-10-09 10:59:52.753123 | orchestrator | | ed53cabd65aa4a4da9d29f1fbae832a9 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-10-09 10:59:52.753150 | orchestrator | | edea1d1afa2f46abbee7d3427bd89854 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-10-09 10:59:52.753197 | orchestrator | | f75408d26de64120ae9118141014654d | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-10-09 10:59:52.753209 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-10-09 10:59:53.037381 | orchestrator | 2025-10-09 10:59:53.037430 | orchestrator | # Cinder 2025-10-09 10:59:53.037444 | orchestrator | 2025-10-09 10:59:53.037456 | orchestrator | + echo 2025-10-09 10:59:53.037467 | orchestrator | + echo '# Cinder' 2025-10-09 10:59:53.037480 | orchestrator | + echo 2025-10-09 10:59:53.037492 | orchestrator | + openstack volume service list 2025-10-09 10:59:55.853237 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-10-09 10:59:55.853355 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-10-09 10:59:55.853389 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-10-09 10:59:55.853402 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-10-09T10:59:53.000000 | 2025-10-09 10:59:55.853413 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-10-09T10:59:54.000000 | 2025-10-09 10:59:55.853424 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-10-09T10:59:55.000000 | 
2025-10-09 10:59:55.853435 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-10-09T10:59:54.000000 | 2025-10-09 10:59:55.853446 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-10-09T10:59:54.000000 | 2025-10-09 10:59:55.853457 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-10-09T10:59:55.000000 | 2025-10-09 10:59:55.853467 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-10-09T10:59:54.000000 | 2025-10-09 10:59:55.853478 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-10-09T10:59:54.000000 | 2025-10-09 10:59:55.853489 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-10-09T10:59:55.000000 | 2025-10-09 10:59:55.853500 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-10-09 10:59:56.144326 | orchestrator | 2025-10-09 10:59:56.144423 | orchestrator | # Neutron 2025-10-09 10:59:56.144438 | orchestrator | 2025-10-09 10:59:56.144450 | orchestrator | + echo 2025-10-09 10:59:56.144461 | orchestrator | + echo '# Neutron' 2025-10-09 10:59:56.144474 | orchestrator | + echo 2025-10-09 10:59:56.144486 | orchestrator | + openstack network agent list 2025-10-09 10:59:59.024582 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-10-09 10:59:59.024711 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-10-09 10:59:59.024726 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-10-09 10:59:59.024737 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | 
ovn-controller | 2025-10-09 10:59:59.024748 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-10-09 10:59:59.024759 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-10-09 10:59:59.024770 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-10-09 10:59:59.024781 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-10-09 10:59:59.024792 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-10-09 10:59:59.024803 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-10-09 10:59:59.024813 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-10-09 10:59:59.024824 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-10-09 10:59:59.024835 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-10-09 10:59:59.330088 | orchestrator | + openstack network service provider list 2025-10-09 11:00:01.943227 | orchestrator | +---------------+------+---------+ 2025-10-09 11:00:01.943344 | orchestrator | | Service Type | Name | Default | 2025-10-09 11:00:01.943359 | orchestrator | +---------------+------+---------+ 2025-10-09 11:00:01.943371 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-10-09 11:00:01.943382 | orchestrator | +---------------+------+---------+ 2025-10-09 11:00:02.265385 | orchestrator | 2025-10-09 11:00:02.265578 | orchestrator | + echo 2025-10-09 11:00:02.265609 
| orchestrator | + echo '# Nova' 2025-10-09 11:00:02.265756 | orchestrator | # Nova 2025-10-09 11:00:02.265774 | orchestrator | 2025-10-09 11:00:02.265785 | orchestrator | + echo 2025-10-09 11:00:02.265797 | orchestrator | + openstack compute service list 2025-10-09 11:00:05.124654 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-10-09 11:00:05.124751 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-10-09 11:00:05.124766 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-10-09 11:00:05.124778 | orchestrator | | 5493384d-2231-4e48-a69f-c74f58bca1bf | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-10-09T10:59:54.000000 | 2025-10-09 11:00:05.124804 | orchestrator | | de9aaf1d-30db-4850-8ed2-68a8915a63f3 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-10-09T10:59:58.000000 | 2025-10-09 11:00:05.124836 | orchestrator | | 77bd8456-6138-4d10-a09c-a2e74f676667 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-10-09T11:00:04.000000 | 2025-10-09 11:00:05.124848 | orchestrator | | c5daf059-c542-47e3-aa65-3b8cb70484a1 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-10-09T11:00:04.000000 | 2025-10-09 11:00:05.124860 | orchestrator | | 2dbeeb0b-fa91-4cc4-a5fe-0c00a35181c8 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-10-09T11:00:04.000000 | 2025-10-09 11:00:05.124870 | orchestrator | | 5aeb428f-337e-48d8-9d32-32d4ea3da884 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-10-09T10:59:59.000000 | 2025-10-09 11:00:05.124881 | orchestrator | | 43a25018-1809-4fba-be8c-ee5d8445e11a | nova-compute | testbed-node-4 | nova | enabled | up | 2025-10-09T10:59:55.000000 | 2025-10-09 11:00:05.124892 | orchestrator | | 
527f228d-f4ed-4154-a139-1b747447692b | nova-compute | testbed-node-3 | nova | enabled | up | 2025-10-09T10:59:55.000000 |
2025-10-09 11:00:05.124903 | orchestrator | | 14eb78b7-ed2d-4426-a639-d22d24d58822 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-10-09T10:59:55.000000 |
2025-10-09 11:00:05.124914 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-10-09 11:00:05.465499 | orchestrator | + openstack hypervisor list
2025-10-09 11:00:08.289387 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-10-09 11:00:08.289478 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-10-09 11:00:08.289494 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-10-09 11:00:08.289505 | orchestrator | | 36a987e5-28c5-4c91-b5bd-1d8f07850832 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-10-09 11:00:08.289516 | orchestrator | | bc1da0e7-85c3-4da9-a486-af57593f2796 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-10-09 11:00:08.289528 | orchestrator | | 9b6e2822-3fda-4759-8323-22409220eb51 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-10-09 11:00:08.289539 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-10-09 11:00:08.617223 | orchestrator |
2025-10-09 11:00:08.617287 | orchestrator | # Run OpenStack test play
2025-10-09 11:00:08.617301 | orchestrator |
2025-10-09 11:00:08.617313 | orchestrator | + echo
2025-10-09 11:00:08.617325 | orchestrator | + echo '# Run OpenStack test play'
2025-10-09 11:00:08.617337 | orchestrator | + echo
2025-10-09 11:00:08.617348 | orchestrator | + osism apply --environment openstack test
2025-10-09 11:00:10.687541 | orchestrator | 2025-10-09 11:00:10 | INFO  | Trying to run play test in environment openstack
2025-10-09 11:00:20.798702 | orchestrator | 2025-10-09 11:00:20 | INFO  | Task 0d90e26a-7f7c-4b62-ac5f-64396a588230 (test) was prepared for execution.
2025-10-09 11:00:20.798820 | orchestrator | 2025-10-09 11:00:20 | INFO  | It takes a moment until task 0d90e26a-7f7c-4b62-ac5f-64396a588230 (test) has been started and output is visible here.
2025-10-09 11:07:38.197434 | orchestrator |
2025-10-09 11:07:38.197543 | orchestrator | PLAY [Create test project] *****************************************************
2025-10-09 11:07:38.197560 | orchestrator |
2025-10-09 11:07:38.197601 | orchestrator | TASK [Create test domain] ******************************************************
2025-10-09 11:07:38.197615 | orchestrator | Thursday 09 October 2025 11:00:25 +0000 (0:00:00.080) 0:00:00.080 ******
2025-10-09 11:07:38.197626 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.197639 | orchestrator |
2025-10-09 11:07:38.197650 | orchestrator | TASK [Create test-admin user] **************************************************
2025-10-09 11:07:38.197661 | orchestrator | Thursday 09 October 2025 11:00:29 +0000 (0:00:03.876) 0:00:03.957 ******
2025-10-09 11:07:38.197672 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.197683 | orchestrator |
2025-10-09 11:07:38.197694 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-10-09 11:07:38.197729 | orchestrator | Thursday 09 October 2025 11:00:33 +0000 (0:00:04.424) 0:00:08.381 ******
2025-10-09 11:07:38.197741 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.197752 | orchestrator |
2025-10-09 11:07:38.197763 | orchestrator | TASK [Create test project] *****************************************************
2025-10-09 11:07:38.197774 | orchestrator | Thursday 09 October 2025 11:00:40 +0000 (0:00:06.955) 0:00:15.337 ******
2025-10-09 11:07:38.197785 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.197796 | orchestrator |
2025-10-09 11:07:38.197806 | orchestrator | TASK [Create test user] ********************************************************
2025-10-09 11:07:38.197817 | orchestrator | Thursday 09 October 2025 11:00:44 +0000 (0:00:04.075) 0:00:19.412 ******
2025-10-09 11:07:38.197828 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.197839 | orchestrator |
2025-10-09 11:07:38.197850 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-10-09 11:07:38.197860 | orchestrator | Thursday 09 October 2025 11:00:48 +0000 (0:00:04.286) 0:00:23.698 ******
2025-10-09 11:07:38.197871 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-10-09 11:07:38.197883 | orchestrator | changed: [localhost] => (item=member)
2025-10-09 11:07:38.197896 | orchestrator | changed: [localhost] => (item=creator)
2025-10-09 11:07:38.197907 | orchestrator |
2025-10-09 11:07:38.197918 | orchestrator | TASK [Create test server group] ************************************************
2025-10-09 11:07:38.197929 | orchestrator | Thursday 09 October 2025 11:01:01 +0000 (0:00:12.567) 0:00:36.265 ******
2025-10-09 11:07:38.197939 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.197950 | orchestrator |
2025-10-09 11:07:38.197961 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-10-09 11:07:38.197974 | orchestrator | Thursday 09 October 2025 11:01:06 +0000 (0:00:05.045) 0:00:41.310 ******
2025-10-09 11:07:38.197992 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.198011 | orchestrator |
2025-10-09 11:07:38.198109 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-10-09 11:07:38.198124 | orchestrator | Thursday 09 October 2025 11:01:11 +0000 (0:00:05.162) 0:00:46.473 ******
2025-10-09 11:07:38.198137 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.198149 | orchestrator |
2025-10-09 11:07:38.198162 | orchestrator | TASK [Create icmp security group] **********************************************
2025-10-09 11:07:38.198176 | orchestrator | Thursday 09 October 2025 11:01:15 +0000 (0:00:04.356) 0:00:50.829 ******
2025-10-09 11:07:38.198188 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.198200 | orchestrator |
2025-10-09 11:07:38.198212 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-10-09 11:07:38.198225 | orchestrator | Thursday 09 October 2025 11:01:20 +0000 (0:00:04.090) 0:00:54.919 ******
2025-10-09 11:07:38.198237 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.198249 | orchestrator |
2025-10-09 11:07:38.198261 | orchestrator | TASK [Create test keypair] *****************************************************
2025-10-09 11:07:38.198273 | orchestrator | Thursday 09 October 2025 11:01:24 +0000 (0:00:04.241) 0:00:59.161 ******
2025-10-09 11:07:38.198286 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.198298 | orchestrator |
2025-10-09 11:07:38.198310 | orchestrator | TASK [Create test network topology] ********************************************
2025-10-09 11:07:38.198323 | orchestrator | Thursday 09 October 2025 11:01:28 +0000 (0:00:03.811) 0:01:02.972 ******
2025-10-09 11:07:38.198335 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.198347 | orchestrator |
2025-10-09 11:07:38.198359 | orchestrator | TASK [Create test instances] ***************************************************
2025-10-09 11:07:38.198370 | orchestrator | Thursday 09 October 2025 11:01:45 +0000 (0:00:17.228) 0:01:20.201 ******
2025-10-09 11:07:38.198381 | orchestrator | changed: [localhost] => (item=test)
2025-10-09 11:07:38.198393 | orchestrator | changed: [localhost] => (item=test-1)
2025-10-09 11:07:38.198404 | orchestrator |
2025-10-09 11:07:38.198415 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-10-09 11:07:38.198426 | orchestrator |
2025-10-09 11:07:38.198437 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-10-09 11:07:38.198458 | orchestrator | changed: [localhost] => (item=test-2)
2025-10-09 11:07:38.198470 | orchestrator |
2025-10-09 11:07:38.198480 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-10-09 11:07:38.198491 | orchestrator | changed: [localhost] => (item=test-3)
2025-10-09 11:07:38.198502 | orchestrator |
2025-10-09 11:07:38.198513 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-10-09 11:07:38.198524 | orchestrator |
2025-10-09 11:07:38.198535 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-10-09 11:07:38.198546 | orchestrator | changed: [localhost] => (item=test-4)
2025-10-09 11:07:38.198557 | orchestrator |
2025-10-09 11:07:38.198568 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-10-09 11:07:38.198605 | orchestrator | Thursday 09 October 2025 11:06:09 +0000 (0:04:24.608) 0:05:44.809 ******
2025-10-09 11:07:38.198617 | orchestrator | changed: [localhost] => (item=test)
2025-10-09 11:07:38.198632 | orchestrator | changed: [localhost] => (item=test-1)
2025-10-09 11:07:38.198643 | orchestrator | changed: [localhost] => (item=test-2)
2025-10-09 11:07:38.198654 | orchestrator | changed: [localhost] => (item=test-3)
2025-10-09 11:07:38.198665 | orchestrator | changed: [localhost] => (item=test-4)
2025-10-09 11:07:38.198676 | orchestrator |
2025-10-09 11:07:38.198688 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-10-09 11:07:38.198718 | orchestrator | Thursday 09 October 2025 11:06:35 +0000 (0:00:25.702) 0:06:10.512 ******
2025-10-09 11:07:38.198730 | orchestrator | changed: [localhost] => (item=test)
2025-10-09 11:07:38.198741 | orchestrator | changed: [localhost] => (item=test-1)
2025-10-09 11:07:38.198752 | orchestrator | changed: [localhost] => (item=test-2)
2025-10-09 11:07:38.198763 | orchestrator | changed: [localhost] => (item=test-3)
2025-10-09 11:07:38.198773 | orchestrator | changed: [localhost] => (item=test-4)
2025-10-09 11:07:38.198784 | orchestrator |
2025-10-09 11:07:38.198795 | orchestrator | TASK [Create test volume] ******************************************************
2025-10-09 11:07:38.198806 | orchestrator | Thursday 09 October 2025 11:07:11 +0000 (0:00:36.018) 0:06:46.530 ******
2025-10-09 11:07:38.198817 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.198828 | orchestrator |
2025-10-09 11:07:38.198839 | orchestrator | TASK [Attach test volume] ******************************************************
2025-10-09 11:07:38.198850 | orchestrator | Thursday 09 October 2025 11:07:18 +0000 (0:00:06.677) 0:06:53.208 ******
2025-10-09 11:07:38.198861 | orchestrator | changed: [localhost]
2025-10-09 11:07:38.198872 | orchestrator |
2025-10-09 11:07:38.198883 | orchestrator | TASK [Create floating ip address] **********************************************
2025-10-09 11:07:38.198894 | orchestrator | Thursday 09 October 2025 11:07:32 +0000 (0:00:13.854) 0:07:07.062 ******
2025-10-09 11:07:38.198905 | orchestrator | ok: [localhost]
2025-10-09 11:07:38.198917 | orchestrator |
2025-10-09 11:07:38.198928 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-10-09 11:07:38.198939 | orchestrator | Thursday 09 October 2025 11:07:37 +0000 (0:00:05.690) 0:07:12.753 ******
2025-10-09 11:07:38.198950 | orchestrator | ok: [localhost] => {
2025-10-09 11:07:38.198961 | orchestrator |  "msg": "192.168.112.100"
2025-10-09 11:07:38.198973 | orchestrator | }
2025-10-09 11:07:38.198984 | orchestrator |
2025-10-09 11:07:38.198995 | orchestrator | PLAY RECAP *********************************************************************
2025-10-09 11:07:38.199021 | orchestrator | localhost : ok=20 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-10-09 11:07:38.199034 | orchestrator |
2025-10-09 11:07:38.199045 | orchestrator |
2025-10-09 11:07:38.199057 | orchestrator | TASKS RECAP ********************************************************************
2025-10-09 11:07:38.199068 | orchestrator | Thursday 09 October 2025 11:07:37 +0000 (0:00:00.040) 0:07:12.794 ******
2025-10-09 11:07:38.199078 | orchestrator | ===============================================================================
2025-10-09 11:07:38.199094 | orchestrator | Create test instances ------------------------------------------------- 264.61s
2025-10-09 11:07:38.199113 | orchestrator | Add tag to instances --------------------------------------------------- 36.02s
2025-10-09 11:07:38.199124 | orchestrator | Add metadata to instances ---------------------------------------------- 25.70s
2025-10-09 11:07:38.199135 | orchestrator | Create test network topology ------------------------------------------- 17.23s
2025-10-09 11:07:38.199146 | orchestrator | Attach test volume ----------------------------------------------------- 13.85s
2025-10-09 11:07:38.199157 | orchestrator | Add member roles to user test ------------------------------------------ 12.57s
2025-10-09 11:07:38.199167 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.96s
2025-10-09 11:07:38.199178 | orchestrator | Create test volume ------------------------------------------------------ 6.68s
2025-10-09 11:07:38.199189 | orchestrator | Create floating ip address ---------------------------------------------- 5.69s
2025-10-09 11:07:38.199200 | orchestrator | Create ssh security group ----------------------------------------------- 5.16s
2025-10-09 11:07:38.199211 | orchestrator | Create test server group ------------------------------------------------ 5.05s
2025-10-09 11:07:38.199222 | orchestrator | Create test-admin user -------------------------------------------------- 4.42s
2025-10-09 11:07:38.199232 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.36s
2025-10-09 11:07:38.199243 | orchestrator | Create test user -------------------------------------------------------- 4.29s
2025-10-09 11:07:38.199254 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.24s
2025-10-09 11:07:38.199265 | orchestrator | Create icmp security group ---------------------------------------------- 4.09s
2025-10-09 11:07:38.199276 | orchestrator | Create test project ----------------------------------------------------- 4.08s
2025-10-09 11:07:38.199287 | orchestrator | Create test domain ------------------------------------------------------ 3.88s
2025-10-09 11:07:38.199298 | orchestrator | Create test keypair ----------------------------------------------------- 3.81s
2025-10-09 11:07:38.199309 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-10-09 11:07:38.587238 | orchestrator | + server_list
2025-10-09 11:07:38.587329 | orchestrator | + openstack --os-cloud test server list
2025-10-09 11:07:42.673699 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+
2025-10-09 11:07:42.673792 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-10-09 11:07:42.673807 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+
2025-10-09 11:07:42.673819 | orchestrator | | b476cfa3-6ba7-408c-b421-541e2f64a37e | test-4 | ACTIVE | auto_allocated_network=10.42.0.5, 192.168.112.182 | N/A (booted from volume) | SCS-1L-1 |
2025-10-09 11:07:42.673831 | orchestrator | | cf1d5a4d-7509-412b-99ba-30d9c0cbc51c | test-3 | ACTIVE | auto_allocated_network=10.42.0.33, 192.168.112.163 | N/A (booted from volume) | SCS-1L-1 |
2025-10-09 11:07:42.673842 | orchestrator | | f09ee639-aaf0-49dd-842f-d8c066411fb7 | test-2 | ACTIVE | auto_allocated_network=10.42.0.62, 192.168.112.160 | N/A (booted from volume) | SCS-1L-1 |
2025-10-09 11:07:42.673853 | orchestrator | | c4a5f8ca-870c-4e38-a0b0-e315217fea45 | test-1 | ACTIVE | auto_allocated_network=10.42.0.38, 192.168.112.109 | N/A (booted from volume) | SCS-1L-1 |
2025-10-09 11:07:42.673864 | orchestrator | | dacfd5ec-a5e5-4bf1-8601-753b43c4d777 | test | ACTIVE | auto_allocated_network=10.42.0.10, 192.168.112.100 | N/A (booted from volume) | SCS-1L-1 |
2025-10-09 11:07:42.673875 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+
2025-10-09 11:07:43.013458 | orchestrator | + openstack --os-cloud test server show test
2025-10-09 11:07:46.421506 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:46.421691 | orchestrator | | Field | Value |
2025-10-09 11:07:46.421715 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:46.421728 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-10-09 11:07:46.421740 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-10-09 11:07:46.421751 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-10-09 11:07:46.421762 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-10-09 11:07:46.421774 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-10-09 11:07:46.421784 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-10-09 11:07:46.421813 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-10-09 11:07:46.421833 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-10-09 11:07:46.421844 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-10-09 11:07:46.421859 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-10-09 11:07:46.421871 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-10-09 11:07:46.421882 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-10-09 11:07:46.421893 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-10-09 11:07:46.421904 | orchestrator | | OS-EXT-STS:task_state | None |
2025-10-09 11:07:46.421915 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-10-09 11:07:46.421926 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:02:29.000000 |
2025-10-09 11:07:46.421950 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-10-09 11:07:46.421963 | orchestrator | | accessIPv4 | |
2025-10-09 11:07:46.421974 | orchestrator | | accessIPv6 | |
2025-10-09 11:07:46.421989 | orchestrator | | addresses | auto_allocated_network=10.42.0.10, 192.168.112.100 |
2025-10-09 11:07:46.422001 | orchestrator | | config_drive | |
2025-10-09 11:07:46.422013 | orchestrator | | created | 2025-10-09T11:01:53Z |
2025-10-09 11:07:46.422106 | orchestrator | | description | None |
2025-10-09 11:07:46.422120 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-10-09 11:07:46.422133 | orchestrator | | hostId | 5d43f5574222daf8d4c2fd41e4c8aec93b82a62abd0bee188087da8d |
2025-10-09 11:07:46.422146 | orchestrator | | host_status | None |
2025-10-09 11:07:46.422176 | orchestrator | | id | dacfd5ec-a5e5-4bf1-8601-753b43c4d777 |
2025-10-09 11:07:46.422190 | orchestrator | | image | N/A (booted from volume) |
2025-10-09 11:07:46.422203 | orchestrator | | key_name | test |
2025-10-09 11:07:46.422216 | orchestrator | | locked | False |
2025-10-09 11:07:46.422229 | orchestrator | | locked_reason | None |
2025-10-09 11:07:46.422242 | orchestrator | | name | test |
2025-10-09 11:07:46.422255 | orchestrator | | pinned_availability_zone | None |
2025-10-09 11:07:46.422267 | orchestrator | | progress | 0 |
2025-10-09 11:07:46.422280 | orchestrator | | project_id | d4ec225126664242ae35543099df8628 |
2025-10-09 11:07:46.422299 | orchestrator | | properties | hostname='test' |
2025-10-09 11:07:46.422325 | orchestrator | | security_groups | name='ssh' |
2025-10-09 11:07:46.422339 | orchestrator | | | name='icmp' |
2025-10-09 11:07:46.422352 | orchestrator | | server_groups | None |
2025-10-09 11:07:46.422374 | orchestrator | | status | ACTIVE |
2025-10-09 11:07:46.422386 | orchestrator | | tags | test |
2025-10-09 11:07:46.422397 | orchestrator | | trusted_image_certificates | None |
2025-10-09 11:07:46.422409 | orchestrator | | updated | 2025-10-09T11:06:15Z |
2025-10-09 11:07:46.422420 | orchestrator | | user_id | 12e855cb161b4deb957f9af1a7a2bf97 |
2025-10-09 11:07:46.422438 | orchestrator | | volumes_attached | delete_on_termination='True', id='8c0cf361-8d66-44f5-9cdf-a037d17b723b' |
2025-10-09 11:07:46.422450 | orchestrator | | | delete_on_termination='False', id='7fe94df7-1268-4e23-94e1-1d0b387f0bc5' |
2025-10-09 11:07:46.425114 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:46.752353 | orchestrator | + openstack --os-cloud test server show test-1
2025-10-09 11:07:49.836727 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:49.836846 | orchestrator | | Field | Value |
2025-10-09 11:07:49.836864 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:49.836876 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-10-09 11:07:49.836888 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-10-09 11:07:49.836899 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-10-09 11:07:49.836930 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-10-09 11:07:49.836943 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-10-09 11:07:49.836954 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-10-09 11:07:49.836983 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-10-09 11:07:49.836995 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-10-09 11:07:49.837011 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-10-09 11:07:49.837023 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-10-09 11:07:49.837035 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-10-09 11:07:49.837046 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-10-09 11:07:49.837066 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-10-09 11:07:49.837077 | orchestrator | | OS-EXT-STS:task_state | None |
2025-10-09 11:07:49.837088 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-10-09 11:07:49.837100 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:03:28.000000 |
2025-10-09 11:07:49.837118 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-10-09 11:07:49.837129 | orchestrator | | accessIPv4 | |
2025-10-09 11:07:49.837145 | orchestrator | | accessIPv6 | |
2025-10-09 11:07:49.837157 | orchestrator | | addresses | auto_allocated_network=10.42.0.38, 192.168.112.109 |
2025-10-09 11:07:49.837168 | orchestrator | | config_drive | |
2025-10-09 11:07:49.837180 | orchestrator | | created | 2025-10-09T11:02:54Z |
2025-10-09 11:07:49.837199 | orchestrator | | description | None |
2025-10-09 11:07:49.837210 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-10-09 11:07:49.837223 | orchestrator | | hostId | dd86de8516fa930bec645fc156859d9b22fdffe5880a9ddee21abb4a |
2025-10-09 11:07:49.837234 | orchestrator | | host_status | None |
2025-10-09 11:07:49.837252 | orchestrator | | id | c4a5f8ca-870c-4e38-a0b0-e315217fea45 |
2025-10-09 11:07:49.837266 | orchestrator | | image | N/A (booted from volume) |
2025-10-09 11:07:49.837298 | orchestrator | | key_name | test |
2025-10-09 11:07:49.837312 | orchestrator | | locked | False |
2025-10-09 11:07:49.837325 | orchestrator | | locked_reason | None |
2025-10-09 11:07:49.837346 | orchestrator | | name | test-1 |
2025-10-09 11:07:49.837359 | orchestrator | | pinned_availability_zone | None |
2025-10-09 11:07:49.837372 | orchestrator | | progress | 0 |
2025-10-09 11:07:49.837385 | orchestrator | | project_id | d4ec225126664242ae35543099df8628 |
2025-10-09 11:07:49.837398 | orchestrator | | properties | hostname='test-1' |
2025-10-09 11:07:49.837417 | orchestrator | | security_groups | name='ssh' |
2025-10-09 11:07:49.837431 | orchestrator | | | name='icmp' |
2025-10-09 11:07:49.837444 | orchestrator | | server_groups | None |
2025-10-09 11:07:49.837457 | orchestrator | | status | ACTIVE |
2025-10-09 11:07:49.837490 | orchestrator | | tags | test |
2025-10-09 11:07:49.837504 | orchestrator | | trusted_image_certificates | None |
2025-10-09 11:07:49.837518 | orchestrator | | updated | 2025-10-09T11:06:20Z |
2025-10-09 11:07:49.837531 | orchestrator | | user_id | 12e855cb161b4deb957f9af1a7a2bf97 |
2025-10-09 11:07:49.837544 | orchestrator | | volumes_attached | delete_on_termination='True', id='56d31ecf-975f-46a8-b321-13a1db1addf9' |
2025-10-09 11:07:49.841055 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:50.113798 | orchestrator | + openstack --os-cloud test server show test-2
2025-10-09 11:07:53.166108 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:53.166208 | orchestrator | | Field | Value |
2025-10-09 11:07:53.166234 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:53.166268 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-10-09 11:07:53.166281 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-10-09 11:07:53.166292 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-10-09 11:07:53.166303 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-10-09 11:07:53.166314 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-10-09 11:07:53.166326 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-10-09 11:07:53.166355 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-10-09 11:07:53.166367 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-10-09 11:07:53.166378 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-10-09 11:07:53.166405 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-10-09 11:07:53.166416 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-10-09 11:07:53.166427 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-10-09 11:07:53.166439 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-10-09 11:07:53.166450 | orchestrator | | OS-EXT-STS:task_state | None |
2025-10-09 11:07:53.166461 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-10-09 11:07:53.166472 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:04:26.000000 |
2025-10-09 11:07:53.166519 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-10-09 11:07:53.166533 | orchestrator | | accessIPv4 | |
2025-10-09 11:07:53.166544 | orchestrator | | accessIPv6 | |
2025-10-09 11:07:53.166569 | orchestrator | | addresses | auto_allocated_network=10.42.0.62, 192.168.112.160 |
2025-10-09 11:07:53.166606 | orchestrator | | config_drive | |
2025-10-09 11:07:53.166620 | orchestrator | | created | 2025-10-09T11:03:51Z |
2025-10-09 11:07:53.166633 | orchestrator | | description | None |
2025-10-09 11:07:53.166646 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-10-09 11:07:53.166659 | orchestrator | | hostId | 241f81ec0d7af74d7cbecef0d03f5875595ec70815c07c4bb00bda0d |
2025-10-09 11:07:53.166672 | orchestrator | | host_status | None |
2025-10-09 11:07:53.166693 | orchestrator | | id | f09ee639-aaf0-49dd-842f-d8c066411fb7 |
2025-10-09 11:07:53.166707 | orchestrator | | image | N/A (booted from volume) |
2025-10-09 11:07:53.166726 | orchestrator | | key_name | test |
2025-10-09 11:07:53.166742 | orchestrator | | locked | False |
2025-10-09 11:07:53.166754 | orchestrator | | locked_reason | None |
2025-10-09 11:07:53.166765 | orchestrator | | name | test-2 |
2025-10-09 11:07:53.166777 | orchestrator | | pinned_availability_zone | None |
2025-10-09 11:07:53.166788 | orchestrator | | progress | 0 |
2025-10-09 11:07:53.166800 | orchestrator | | project_id | d4ec225126664242ae35543099df8628 |
2025-10-09 11:07:53.166811 | orchestrator | | properties | hostname='test-2' |
2025-10-09 11:07:53.166829 | orchestrator | | security_groups | name='ssh' |
2025-10-09 11:07:53.166848 | orchestrator | | | name='icmp' |
2025-10-09 11:07:53.166866 | orchestrator | | server_groups | None |
2025-10-09 11:07:53.166877 | orchestrator | | status | ACTIVE |
2025-10-09 11:07:53.166889 | orchestrator | | tags | test |
2025-10-09 11:07:53.166900 | orchestrator | | trusted_image_certificates | None |
2025-10-09 11:07:53.166912 | orchestrator | | updated | 2025-10-09T11:06:25Z |
2025-10-09 11:07:53.166923 | orchestrator | | user_id | 12e855cb161b4deb957f9af1a7a2bf97 |
2025-10-09 11:07:53.166935 | orchestrator | | volumes_attached | delete_on_termination='True', id='2eaed588-e04f-4ee9-b2aa-a44d14018bdf' |
2025-10-09 11:07:53.167669 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:53.462490 | orchestrator | + openstack --os-cloud test server show test-3
2025-10-09 11:07:56.591063 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:56.591190 | orchestrator | | Field | Value |
2025-10-09 11:07:56.591209 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:07:56.591222 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-10-09 11:07:56.591235 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-10-09 11:07:56.591266 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-10-09 11:07:56.591278 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-10-09 11:07:56.591290 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-10-09 11:07:56.591301 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-10-09 11:07:56.591330 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-10-09 11:07:56.591366 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-10-09 11:07:56.591379 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-10-09 11:07:56.591396 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-10-09 11:07:56.591408 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-10-09 11:07:56.591420 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-10-09 11:07:56.591431 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-10-09 11:07:56.591442 | orchestrator | | OS-EXT-STS:task_state | None |
2025-10-09 11:07:56.591453 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-10-09 11:07:56.591464 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:05:12.000000 |
2025-10-09 11:07:56.591493 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-10-09 11:07:56.591505 | orchestrator | | accessIPv4 | |
2025-10-09 11:07:56.591517 | orchestrator | | accessIPv6 | |
2025-10-09 11:07:56.591533 | orchestrator | | addresses | auto_allocated_network=10.42.0.33, 192.168.112.163 |
2025-10-09 11:07:56.591544 | orchestrator | | config_drive | |
2025-10-09 11:07:56.591556 | orchestrator | | created | 2025-10-09T11:04:46Z |
2025-10-09 11:07:56.591567 | orchestrator | | description | None |
2025-10-09 11:07:56.591578 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-10-09 11:07:56.591642 | orchestrator | | hostId | 5d43f5574222daf8d4c2fd41e4c8aec93b82a62abd0bee188087da8d |
2025-10-09 11:07:56.591663 | orchestrator | | host_status | None |
2025-10-09 11:07:56.591682 | orchestrator | | id | cf1d5a4d-7509-412b-99ba-30d9c0cbc51c |
2025-10-09 11:07:56.591694 | orchestrator | | image | N/A (booted from volume) |
2025-10-09 11:07:56.591705 | orchestrator | | key_name | test |
2025-10-09 11:07:56.591722 | orchestrator | | locked | False |
2025-10-09 11:07:56.591734 | orchestrator | | locked_reason | None |
2025-10-09 11:07:56.591745 | orchestrator | | name | test-3 |
2025-10-09 11:07:56.591756 | orchestrator | | pinned_availability_zone | None |
2025-10-09 11:07:56.591767 | orchestrator | | progress | 0 |
2025-10-09 11:07:56.591786 | orchestrator | | project_id | d4ec225126664242ae35543099df8628 |
2025-10-09 11:07:56.591798 | orchestrator | | properties | hostname='test-3' |
2025-10-09 11:07:56.591817 | orchestrator | | security_groups | name='ssh' |
2025-10-09 11:07:56.591828 | orchestrator | | | name='icmp' |
2025-10-09 11:07:56.591840 | orchestrator | | server_groups | None |
2025-10-09 11:07:56.591856 | orchestrator | | status | ACTIVE |
2025-10-09 11:07:56.591868 | orchestrator | | tags | test |
2025-10-09 11:07:56.591879 | orchestrator | | trusted_image_certificates | None | 2025-10-09 11:07:56.591890 | orchestrator | | updated | 2025-10-09T11:06:30Z | 2025-10-09 11:07:56.591902 | orchestrator | | user_id | 12e855cb161b4deb957f9af1a7a2bf97 | 2025-10-09 11:07:56.591923 | orchestrator | | volumes_attached | delete_on_termination='True', id='a141846c-d320-4366-b344-68edc6559c29' | 2025-10-09 11:07:56.596041 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:07:56.896752 | orchestrator | + openstack --os-cloud test server show test-4 2025-10-09 11:07:59.848248 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:07:59.848336 | orchestrator | | Field | Value | 2025-10-09 11:07:59.848349 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-10-09 11:07:59.848370 | orchestrator | | OS-DCF:diskConfig | MANUAL | 
2025-10-09 11:07:59.848378 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-10-09 11:07:59.848386 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-10-09 11:07:59.848394 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-10-09 11:07:59.848419 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-10-09 11:07:59.848428 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-10-09 11:07:59.848450 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-10-09 11:07:59.848458 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-10-09 11:07:59.848466 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-10-09 11:07:59.848478 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-10-09 11:07:59.848486 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-10-09 11:07:59.848494 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-10-09 11:07:59.848502 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-10-09 11:07:59.848515 | orchestrator | | OS-EXT-STS:task_state | None |
2025-10-09 11:07:59.848523 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-10-09 11:07:59.848531 | orchestrator | | OS-SRV-USG:launched_at | 2025-10-09T11:05:56.000000 |
2025-10-09 11:07:59.848544 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-10-09 11:07:59.848552 | orchestrator | | accessIPv4 | |
2025-10-09 11:07:59.848560 | orchestrator | | accessIPv6 | |
2025-10-09 11:07:59.848567 | orchestrator | | addresses | auto_allocated_network=10.42.0.5, 192.168.112.182 |
2025-10-09 11:07:59.848575 | orchestrator | | config_drive | |
2025-10-09 11:07:59.848936 | orchestrator | | created | 2025-10-09T11:05:30Z |
2025-10-09 11:07:59.848956 | orchestrator | | description | None |
2025-10-09 11:07:59.848965 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-10-09 11:07:59.848975 | orchestrator | | hostId | 241f81ec0d7af74d7cbecef0d03f5875595ec70815c07c4bb00bda0d |
2025-10-09 11:07:59.848983 | orchestrator | | host_status | None |
2025-10-09 11:07:59.849002 | orchestrator | | id | b476cfa3-6ba7-408c-b421-541e2f64a37e |
2025-10-09 11:07:59.849012 | orchestrator | | image | N/A (booted from volume) |
2025-10-09 11:07:59.849020 | orchestrator | | key_name | test |
2025-10-09 11:07:59.849029 | orchestrator | | locked | False |
2025-10-09 11:07:59.849038 | orchestrator | | locked_reason | None |
2025-10-09 11:07:59.849053 | orchestrator | | name | test-4 |
2025-10-09 11:07:59.849062 | orchestrator | | pinned_availability_zone | None |
2025-10-09 11:07:59.849071 | orchestrator | | progress | 0 |
2025-10-09 11:07:59.849079 | orchestrator | | project_id | d4ec225126664242ae35543099df8628 |
2025-10-09 11:07:59.849088 | orchestrator | | properties | hostname='test-4' |
2025-10-09 11:07:59.849107 | orchestrator | | security_groups | name='ssh' |
2025-10-09 11:07:59.849116 | orchestrator | | | name='icmp' |
2025-10-09 11:07:59.849125 | orchestrator | | server_groups | None |
2025-10-09 11:07:59.849134 | orchestrator | | status | ACTIVE |
2025-10-09 11:07:59.849143 | orchestrator | | tags | test |
2025-10-09 11:07:59.849157 | orchestrator | | trusted_image_certificates | None |
2025-10-09 11:07:59.849166 | orchestrator | | updated | 2025-10-09T11:06:35Z |
2025-10-09 11:07:59.849175 | orchestrator | | user_id | 12e855cb161b4deb957f9af1a7a2bf97 |
2025-10-09 11:07:59.849184 | orchestrator | | volumes_attached | delete_on_termination='True', id='1ff83117-102a-402b-930f-c4352da31702' |
2025-10-09 11:07:59.852382 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------------------------------------------------+
2025-10-09 11:08:00.149351 | orchestrator | + server_ping
2025-10-09 11:08:00.151375 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-10-09 11:08:00.151401 | orchestrator | ++ tr -d '\r'
2025-10-09 11:08:03.221346 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:08:03.221449 | orchestrator | + ping -c3 192.168.112.109
2025-10-09 11:08:03.238544 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2025-10-09 11:08:03.238628 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=10.1 ms
2025-10-09 11:08:04.233042 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.49 ms
2025-10-09 11:08:05.234527 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.71 ms
2025-10-09 11:08:05.234784 | orchestrator |
2025-10-09 11:08:05.234804 | orchestrator | --- 192.168.112.109 ping statistics ---
2025-10-09 11:08:05.234817 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-09 11:08:05.234829 | orchestrator | rtt min/avg/max/mdev = 1.705/4.768/10.108/3.789 ms
2025-10-09 11:08:05.234853 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:08:05.234865 | orchestrator | + ping -c3 192.168.112.163
2025-10-09 11:08:05.245359 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data.
2025-10-09 11:08:05.245386 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=6.50 ms
2025-10-09 11:08:06.243673 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.69 ms
2025-10-09 11:08:07.245713 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=1.97 ms
2025-10-09 11:08:07.245809 | orchestrator |
2025-10-09 11:08:07.245823 | orchestrator | --- 192.168.112.163 ping statistics ---
2025-10-09 11:08:07.245836 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-09 11:08:07.245848 | orchestrator | rtt min/avg/max/mdev = 1.972/3.720/6.504/1.989 ms
2025-10-09 11:08:07.246550 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:08:07.246575 | orchestrator | + ping -c3 192.168.112.182
2025-10-09 11:08:07.259112 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2025-10-09 11:08:07.259140 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=8.66 ms
2025-10-09 11:08:08.255707 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=3.03 ms
2025-10-09 11:08:09.255953 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=2.12 ms
2025-10-09 11:08:09.256772 | orchestrator |
2025-10-09 11:08:09.256842 | orchestrator | --- 192.168.112.182 ping statistics ---
2025-10-09 11:08:09.256855 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:08:09.256865 | orchestrator | rtt min/avg/max/mdev = 2.120/4.602/8.661/2.893 ms
2025-10-09 11:08:09.256890 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:08:09.256901 | orchestrator | + ping -c3 192.168.112.160
2025-10-09 11:08:09.268914 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data.
2025-10-09 11:08:09.268956 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=8.93 ms
2025-10-09 11:08:10.263132 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.33 ms
2025-10-09 11:08:11.265530 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=2.04 ms
2025-10-09 11:08:11.265660 | orchestrator |
2025-10-09 11:08:11.265674 | orchestrator | --- 192.168.112.160 ping statistics ---
2025-10-09 11:08:11.265685 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:08:11.265694 | orchestrator | rtt min/avg/max/mdev = 2.036/4.429/8.925/3.180 ms
2025-10-09 11:08:11.265704 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:08:11.265714 | orchestrator | + ping -c3 192.168.112.100
2025-10-09 11:08:11.279678 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2025-10-09 11:08:11.279698 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=8.61 ms
2025-10-09 11:08:12.276135 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.83 ms
2025-10-09 11:08:13.276895 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=2.14 ms
2025-10-09 11:08:13.276992 | orchestrator |
2025-10-09 11:08:13.277007 | orchestrator | --- 192.168.112.100 ping statistics ---
2025-10-09 11:08:13.277020 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:08:13.277031 | orchestrator | rtt min/avg/max/mdev = 2.139/4.524/8.610/2.902 ms
2025-10-09 11:08:13.277695 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-10-09 11:08:13.277719 | orchestrator | + compute_list
2025-10-09 11:08:13.277732 | orchestrator | + osism manage compute list testbed-node-3
2025-10-09 11:08:16.736810 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:08:16.736930 | orchestrator | | ID | Name | Status |
2025-10-09 11:08:16.736946 | orchestrator | |--------------------------------------+--------+----------|
2025-10-09 11:08:16.736958 | orchestrator | | b476cfa3-6ba7-408c-b421-541e2f64a37e | test-4 | ACTIVE |
2025-10-09 11:08:16.736969 | orchestrator | | f09ee639-aaf0-49dd-842f-d8c066411fb7 | test-2 | ACTIVE |
2025-10-09 11:08:16.736980 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:08:17.094398 | orchestrator | + osism manage compute list testbed-node-4
2025-10-09 11:08:20.509096 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:08:20.509198 | orchestrator | | ID | Name | Status |
2025-10-09 11:08:20.509211 | orchestrator | |--------------------------------------+--------+----------|
2025-10-09 11:08:20.509223 | orchestrator | | c4a5f8ca-870c-4e38-a0b0-e315217fea45 | test-1 | ACTIVE |
2025-10-09 11:08:20.509258 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:08:20.893563 | orchestrator | + osism manage compute list testbed-node-5
2025-10-09 11:08:24.526996 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:08:24.527110 | orchestrator | | ID | Name | Status |
2025-10-09 11:08:24.527126 | orchestrator | |--------------------------------------+--------+----------|
2025-10-09 11:08:24.527138 | orchestrator | | cf1d5a4d-7509-412b-99ba-30d9c0cbc51c | test-3 | ACTIVE |
2025-10-09 11:08:24.527150 | orchestrator | | dacfd5ec-a5e5-4bf1-8601-753b43c4d777 | test | ACTIVE |
2025-10-09 11:08:24.527161 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:08:24.866421 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-10-09 11:08:27.991320 | orchestrator | 2025-10-09 11:08:27 | INFO  | Live migrating server c4a5f8ca-870c-4e38-a0b0-e315217fea45
2025-10-09 11:08:41.176417 | orchestrator | 2025-10-09 11:08:41 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress
2025-10-09 11:08:43.583640 | orchestrator | 2025-10-09 11:08:43 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress
2025-10-09 11:08:45.981661 | orchestrator | 2025-10-09 11:08:45 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress
2025-10-09 11:08:48.507704 | orchestrator | 2025-10-09 11:08:48 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress
2025-10-09 11:08:50.903073 | orchestrator | 2025-10-09 11:08:50 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress
2025-10-09 11:08:53.180943 | orchestrator | 2025-10-09 11:08:53 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress
2025-10-09 11:08:55.463154 | orchestrator | 2025-10-09 11:08:55 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress
2025-10-09 11:08:57.705856 | orchestrator | 2025-10-09 11:08:57 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress
2025-10-09 11:09:00.016028 | orchestrator | 2025-10-09 11:09:00 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress
2025-10-09 11:09:02.361467 | orchestrator | 2025-10-09 11:09:02 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) completed with status ACTIVE
2025-10-09 11:09:02.725384 | orchestrator | + compute_list
2025-10-09 11:09:02.725476 | orchestrator | + osism manage compute list testbed-node-3
2025-10-09 11:09:06.166289 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:09:06.166401 | orchestrator | | ID | Name | Status |
2025-10-09 11:09:06.166416 | orchestrator | |--------------------------------------+--------+----------|
2025-10-09 11:09:06.166428 | orchestrator | | b476cfa3-6ba7-408c-b421-541e2f64a37e | test-4 | ACTIVE |
2025-10-09 11:09:06.166439 | orchestrator | | f09ee639-aaf0-49dd-842f-d8c066411fb7 | test-2 | ACTIVE |
2025-10-09 11:09:06.166451 | orchestrator | | c4a5f8ca-870c-4e38-a0b0-e315217fea45 | test-1 | ACTIVE |
2025-10-09 11:09:06.166462 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:09:06.538877 | orchestrator | + osism manage compute list testbed-node-4
2025-10-09 11:09:09.426349 | orchestrator | +------+--------+----------+
2025-10-09 11:09:09.426462 | orchestrator | | ID | Name | Status |
2025-10-09 11:09:09.426478 | orchestrator | |------+--------+----------|
2025-10-09 11:09:09.426491 | orchestrator | +------+--------+----------+
2025-10-09 11:09:09.763943 | orchestrator | + osism manage compute list testbed-node-5
2025-10-09 11:09:13.095907 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:09:13.096009 | orchestrator | | ID | Name | Status |
2025-10-09 11:09:13.096025 | orchestrator | |--------------------------------------+--------+----------|
2025-10-09 11:09:13.096037 | orchestrator | | cf1d5a4d-7509-412b-99ba-30d9c0cbc51c | test-3 | ACTIVE |
2025-10-09 11:09:13.096073 | orchestrator | | dacfd5ec-a5e5-4bf1-8601-753b43c4d777 | test | ACTIVE |
2025-10-09 11:09:13.096085 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:09:13.476298 | orchestrator | + server_ping
2025-10-09 11:09:13.477074 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-10-09 11:09:13.477115 | orchestrator | ++ tr -d '\r'
2025-10-09 11:09:16.567208 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:09:16.567313 | orchestrator | + ping -c3 192.168.112.109
2025-10-09 11:09:16.577940 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2025-10-09 11:09:16.577966 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=7.55 ms
2025-10-09 11:09:17.574504 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=3.05 ms
2025-10-09 11:09:18.574737 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.92 ms
2025-10-09 11:09:18.574839 | orchestrator |
2025-10-09 11:09:18.574856 | orchestrator | --- 192.168.112.109 ping statistics ---
2025-10-09 11:09:18.574869 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-10-09 11:09:18.574881 | orchestrator | rtt min/avg/max/mdev = 1.921/4.174/7.552/2.432 ms
2025-10-09 11:09:18.575383 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:09:18.575406 | orchestrator | + ping -c3 192.168.112.163
2025-10-09 11:09:18.588671 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data.
2025-10-09 11:09:18.588726 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=7.12 ms
2025-10-09 11:09:19.585967 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.70 ms
2025-10-09 11:09:20.587882 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=2.11 ms
2025-10-09 11:09:20.587974 | orchestrator |
2025-10-09 11:09:20.587989 | orchestrator | --- 192.168.112.163 ping statistics ---
2025-10-09 11:09:20.588001 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-09 11:09:20.588013 | orchestrator | rtt min/avg/max/mdev = 2.110/3.975/7.122/2.237 ms
2025-10-09 11:09:20.588036 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:09:20.588049 | orchestrator | + ping -c3 192.168.112.182
2025-10-09 11:09:20.599854 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2025-10-09 11:09:20.599881 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=7.78 ms
2025-10-09 11:09:21.597466 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.89 ms
2025-10-09 11:09:22.598379 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=2.23 ms
2025-10-09 11:09:22.598473 | orchestrator |
2025-10-09 11:09:22.598489 | orchestrator | --- 192.168.112.182 ping statistics ---
2025-10-09 11:09:22.598519 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:09:22.598532 | orchestrator | rtt min/avg/max/mdev = 2.225/4.297/7.775/2.474 ms
2025-10-09 11:09:22.598555 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:09:22.598568 | orchestrator | + ping -c3 192.168.112.160
2025-10-09 11:09:22.610809 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data.
2025-10-09 11:09:22.610878 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=7.59 ms
2025-10-09 11:09:23.607940 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.76 ms
2025-10-09 11:09:24.609449 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=2.71 ms
2025-10-09 11:09:24.609549 | orchestrator |
2025-10-09 11:09:24.609565 | orchestrator | --- 192.168.112.160 ping statistics ---
2025-10-09 11:09:24.609578 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:09:24.609590 | orchestrator | rtt min/avg/max/mdev = 2.711/4.351/7.585/2.286 ms
2025-10-09 11:09:24.609602 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:09:24.609614 | orchestrator | + ping -c3 192.168.112.100
2025-10-09 11:09:24.623826 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2025-10-09 11:09:24.623900 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=9.08 ms
2025-10-09 11:09:25.618879 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.54 ms
2025-10-09 11:09:26.621924 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=2.47 ms
2025-10-09 11:09:26.622080 | orchestrator |
2025-10-09 11:09:26.622099 | orchestrator | --- 192.168.112.100 ping statistics ---
2025-10-09 11:09:26.622112 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:09:26.622124 | orchestrator | rtt min/avg/max/mdev = 2.471/4.696/9.077/3.097 ms
2025-10-09 11:09:26.622135 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2025-10-09 11:09:29.864049 | orchestrator | 2025-10-09 11:09:29 | INFO  | Live migrating server cf1d5a4d-7509-412b-99ba-30d9c0cbc51c
2025-10-09 11:09:42.808127 | orchestrator | 2025-10-09 11:09:42 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress
2025-10-09 11:09:45.138649 | orchestrator | 2025-10-09 11:09:45 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress
2025-10-09 11:09:47.462343 | orchestrator | 2025-10-09 11:09:47 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress
2025-10-09 11:09:49.874871 | orchestrator | 2025-10-09 11:09:49 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress
2025-10-09 11:09:52.171068 | orchestrator | 2025-10-09 11:09:52 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress
2025-10-09 11:09:54.445661 | orchestrator | 2025-10-09 11:09:54 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress
2025-10-09 11:09:56.774202 | orchestrator | 2025-10-09 11:09:56 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress
2025-10-09 11:09:59.079186 | orchestrator | 2025-10-09 11:09:59 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress
2025-10-09 11:10:01.387359 | orchestrator | 2025-10-09 11:10:01 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) completed with status ACTIVE
2025-10-09 11:10:01.387482 | orchestrator | 2025-10-09 11:10:01 | INFO  | Live migrating server dacfd5ec-a5e5-4bf1-8601-753b43c4d777
2025-10-09 11:10:11.885799 | orchestrator | 2025-10-09 11:10:11 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:14.253508 | orchestrator | 2025-10-09 11:10:14 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:16.607877 | orchestrator | 2025-10-09 11:10:16 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:18.928025 | orchestrator | 2025-10-09 11:10:18 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:21.239294 | orchestrator | 2025-10-09 11:10:21 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:23.531905 | orchestrator | 2025-10-09 11:10:23 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:25.878876 | orchestrator | 2025-10-09 11:10:25 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:28.210077 | orchestrator | 2025-10-09 11:10:28 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:30.492917 | orchestrator | 2025-10-09 11:10:30 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:32.863844 | orchestrator | 2025-10-09 11:10:32 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress
2025-10-09 11:10:35.242777 | orchestrator | 2025-10-09 11:10:35 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) completed with status ACTIVE
2025-10-09 11:10:35.647664 | orchestrator | + compute_list
2025-10-09 11:10:35.647793 | orchestrator | + osism manage compute list testbed-node-3
2025-10-09 11:10:38.882954 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:10:38.883062 | orchestrator | | ID | Name | Status |
2025-10-09 11:10:38.883077 | orchestrator | |--------------------------------------+--------+----------|
2025-10-09 11:10:38.883089 | orchestrator | | b476cfa3-6ba7-408c-b421-541e2f64a37e | test-4 | ACTIVE |
2025-10-09 11:10:38.883100 | orchestrator | | cf1d5a4d-7509-412b-99ba-30d9c0cbc51c | test-3 | ACTIVE |
2025-10-09 11:10:38.883112 | orchestrator | | f09ee639-aaf0-49dd-842f-d8c066411fb7 | test-2 | ACTIVE |
2025-10-09 11:10:38.883123 | orchestrator | | c4a5f8ca-870c-4e38-a0b0-e315217fea45 | test-1 | ACTIVE |
2025-10-09 11:10:38.883134 | orchestrator | | dacfd5ec-a5e5-4bf1-8601-753b43c4d777 | test | ACTIVE |
2025-10-09 11:10:38.883146 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:10:39.274269 | orchestrator | + osism manage compute list testbed-node-4
2025-10-09 11:10:42.228938 | orchestrator | +------+--------+----------+
2025-10-09 11:10:42.229037 | orchestrator | | ID | Name | Status |
2025-10-09 11:10:42.229052 | orchestrator | |------+--------+----------|
2025-10-09 11:10:42.229064 | orchestrator | +------+--------+----------+
2025-10-09 11:10:42.684079 | orchestrator | + osism manage compute list testbed-node-5
2025-10-09 11:10:45.621317 | orchestrator | +------+--------+----------+
2025-10-09 11:10:45.621432 | orchestrator | | ID | Name | Status |
2025-10-09 11:10:45.621450 | orchestrator | |------+--------+----------|
2025-10-09 11:10:45.621462 | orchestrator | +------+--------+----------+
2025-10-09 11:10:46.206407 | orchestrator | + server_ping
2025-10-09 11:10:46.207891 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-10-09 11:10:46.208051 | orchestrator | ++ tr -d '\r'
2025-10-09 11:10:49.816803 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:10:49.816910 | orchestrator | + ping -c3 192.168.112.109
2025-10-09 11:10:49.826674 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2025-10-09 11:10:49.826699 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=7.25 ms
2025-10-09 11:10:50.824776 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.89 ms
2025-10-09 11:10:51.825687 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.89 ms
2025-10-09 11:10:51.825808 | orchestrator |
2025-10-09 11:10:51.825825 | orchestrator | --- 192.168.112.109 ping statistics ---
2025-10-09 11:10:51.825840 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:10:51.825858 | orchestrator | rtt min/avg/max/mdev = 1.888/4.009/7.247/2.325 ms
2025-10-09 11:10:51.826081 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:10:51.826105 | orchestrator | + ping -c3 192.168.112.163
2025-10-09 11:10:51.834904 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data.
2025-10-09 11:10:51.834937 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=6.50 ms
2025-10-09 11:10:52.833426 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.96 ms
2025-10-09 11:10:53.834672 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=1.99 ms
2025-10-09 11:10:53.834822 | orchestrator |
2025-10-09 11:10:53.834838 | orchestrator | --- 192.168.112.163 ping statistics ---
2025-10-09 11:10:53.834852 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:10:53.834864 | orchestrator | rtt min/avg/max/mdev = 1.985/3.814/6.502/1.941 ms
2025-10-09 11:10:53.835283 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:10:53.835308 | orchestrator | + ping -c3 192.168.112.182
2025-10-09 11:10:53.847947 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2025-10-09 11:10:53.847984 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=8.13 ms
2025-10-09 11:10:54.844204 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.73 ms
2025-10-09 11:10:55.847007 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=2.51 ms
2025-10-09 11:10:55.847112 | orchestrator |
2025-10-09 11:10:55.847128 | orchestrator | --- 192.168.112.182 ping statistics ---
2025-10-09 11:10:55.847140 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-10-09 11:10:55.847152 | orchestrator | rtt min/avg/max/mdev = 2.507/4.454/8.127/2.598 ms
2025-10-09 11:10:55.847433 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:10:55.848104 | orchestrator | + ping -c3 192.168.112.160
2025-10-09 11:10:55.860997 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data.
2025-10-09 11:10:55.861040 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=8.82 ms 2025-10-09 11:10:56.855045 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.27 ms 2025-10-09 11:10:57.857075 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=2.04 ms 2025-10-09 11:10:57.857163 | orchestrator | 2025-10-09 11:10:57.857178 | orchestrator | --- 192.168.112.160 ping statistics --- 2025-10-09 11:10:57.857190 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-10-09 11:10:57.857201 | orchestrator | rtt min/avg/max/mdev = 2.042/4.379/8.822/3.142 ms 2025-10-09 11:10:57.857214 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:10:57.857226 | orchestrator | + ping -c3 192.168.112.100 2025-10-09 11:10:57.870512 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 2025-10-09 11:10:57.870537 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=10.3 ms 2025-10-09 11:10:58.863719 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.76 ms 2025-10-09 11:10:59.865928 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=2.26 ms 2025-10-09 11:10:59.866062 | orchestrator | 2025-10-09 11:10:59.866076 | orchestrator | --- 192.168.112.100 ping statistics --- 2025-10-09 11:10:59.866086 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-10-09 11:10:59.866095 | orchestrator | rtt min/avg/max/mdev = 2.264/5.118/10.332/3.691 ms 2025-10-09 11:10:59.866403 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-10-09 11:11:03.285700 | orchestrator | 2025-10-09 11:11:03 | INFO  | Live migrating server b476cfa3-6ba7-408c-b421-541e2f64a37e 2025-10-09 11:11:14.327414 | orchestrator | 2025-10-09 11:11:14 | INFO  | Live migration of 
b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:11:16.670189 | orchestrator | 2025-10-09 11:11:16 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:11:19.026580 | orchestrator | 2025-10-09 11:11:19 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:11:21.376812 | orchestrator | 2025-10-09 11:11:21 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:11:23.676368 | orchestrator | 2025-10-09 11:11:23 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:11:25.967435 | orchestrator | 2025-10-09 11:11:25 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:11:28.237056 | orchestrator | 2025-10-09 11:11:28 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:11:30.510566 | orchestrator | 2025-10-09 11:11:30 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:11:32.824210 | orchestrator | 2025-10-09 11:11:32 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) completed with status ACTIVE 2025-10-09 11:11:32.824305 | orchestrator | 2025-10-09 11:11:32 | INFO  | Live migrating server cf1d5a4d-7509-412b-99ba-30d9c0cbc51c 2025-10-09 11:11:43.815840 | orchestrator | 2025-10-09 11:11:43 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:11:46.212495 | orchestrator | 2025-10-09 11:11:46 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:11:48.514338 | orchestrator | 2025-10-09 11:11:48 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:11:50.830951 | orchestrator 
| 2025-10-09 11:11:50 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:11:53.090544 | orchestrator | 2025-10-09 11:11:53 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:11:55.335050 | orchestrator | 2025-10-09 11:11:55 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:11:57.631822 | orchestrator | 2025-10-09 11:11:57 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:11:59.982634 | orchestrator | 2025-10-09 11:11:59 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:12:02.247348 | orchestrator | 2025-10-09 11:12:02 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) completed with status ACTIVE 2025-10-09 11:12:02.247456 | orchestrator | 2025-10-09 11:12:02 | INFO  | Live migrating server f09ee639-aaf0-49dd-842f-d8c066411fb7 2025-10-09 11:12:14.795435 | orchestrator | 2025-10-09 11:12:14 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:12:17.118533 | orchestrator | 2025-10-09 11:12:17 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:12:19.524704 | orchestrator | 2025-10-09 11:12:19 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:12:21.849423 | orchestrator | 2025-10-09 11:12:21 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:12:24.140176 | orchestrator | 2025-10-09 11:12:24 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:12:26.409639 | orchestrator | 2025-10-09 11:12:26 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in 
progress 2025-10-09 11:12:28.741234 | orchestrator | 2025-10-09 11:12:28 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:12:31.019199 | orchestrator | 2025-10-09 11:12:31 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:12:33.318730 | orchestrator | 2025-10-09 11:12:33 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) completed with status ACTIVE 2025-10-09 11:12:33.318884 | orchestrator | 2025-10-09 11:12:33 | INFO  | Live migrating server c4a5f8ca-870c-4e38-a0b0-e315217fea45 2025-10-09 11:12:43.533538 | orchestrator | 2025-10-09 11:12:43 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:12:45.880976 | orchestrator | 2025-10-09 11:12:45 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:12:48.179700 | orchestrator | 2025-10-09 11:12:48 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:12:50.489639 | orchestrator | 2025-10-09 11:12:50 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:12:53.003838 | orchestrator | 2025-10-09 11:12:53 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:12:55.235064 | orchestrator | 2025-10-09 11:12:55 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:12:57.544050 | orchestrator | 2025-10-09 11:12:57 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:12:59.902681 | orchestrator | 2025-10-09 11:12:59 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:13:02.165335 | orchestrator | 2025-10-09 11:13:02 | INFO  | Live migration of 
c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) completed with status ACTIVE 2025-10-09 11:13:02.165437 | orchestrator | 2025-10-09 11:13:02 | INFO  | Live migrating server dacfd5ec-a5e5-4bf1-8601-753b43c4d777 2025-10-09 11:13:12.924293 | orchestrator | 2025-10-09 11:13:12 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:15.252146 | orchestrator | 2025-10-09 11:13:15 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:17.619535 | orchestrator | 2025-10-09 11:13:17 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:20.016406 | orchestrator | 2025-10-09 11:13:20 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:22.543413 | orchestrator | 2025-10-09 11:13:22 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:24.841255 | orchestrator | 2025-10-09 11:13:24 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:27.087488 | orchestrator | 2025-10-09 11:13:27 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:29.382742 | orchestrator | 2025-10-09 11:13:29 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:31.720146 | orchestrator | 2025-10-09 11:13:31 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:33.973578 | orchestrator | 2025-10-09 11:13:33 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:13:36.474271 | orchestrator | 2025-10-09 11:13:36 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) completed with status ACTIVE 2025-10-09 11:13:36.844569 | orchestrator | + 
compute_list
2025-10-09 11:13:36.844665 | orchestrator | + osism manage compute list testbed-node-3
2025-10-09 11:13:39.868669 | orchestrator | +------+--------+----------+
2025-10-09 11:13:39.868780 | orchestrator | | ID   | Name   | Status   |
2025-10-09 11:13:39.868796 | orchestrator | |------+--------+----------|
2025-10-09 11:13:39.868809 | orchestrator | +------+--------+----------+
2025-10-09 11:13:40.248615 | orchestrator | + osism manage compute list testbed-node-4
2025-10-09 11:13:43.549557 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:13:43.549678 | orchestrator | | ID                                   | Name   | Status   |
2025-10-09 11:13:43.549692 | orchestrator | |--------------------------------------+--------+----------|
2025-10-09 11:13:43.549704 | orchestrator | | b476cfa3-6ba7-408c-b421-541e2f64a37e | test-4 | ACTIVE   |
2025-10-09 11:13:43.549715 | orchestrator | | cf1d5a4d-7509-412b-99ba-30d9c0cbc51c | test-3 | ACTIVE   |
2025-10-09 11:13:43.549726 | orchestrator | | f09ee639-aaf0-49dd-842f-d8c066411fb7 | test-2 | ACTIVE   |
2025-10-09 11:13:43.549737 | orchestrator | | c4a5f8ca-870c-4e38-a0b0-e315217fea45 | test-1 | ACTIVE   |
2025-10-09 11:13:43.549748 | orchestrator | | dacfd5ec-a5e5-4bf1-8601-753b43c4d777 | test   | ACTIVE   |
2025-10-09 11:13:43.549759 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:13:43.925069 | orchestrator | + osism manage compute list testbed-node-5
2025-10-09 11:13:46.946346 | orchestrator | +------+--------+----------+
2025-10-09 11:13:46.946448 | orchestrator | | ID   | Name   | Status   |
2025-10-09 11:13:46.946463 | orchestrator | |------+--------+----------|
2025-10-09 11:13:46.946475 | orchestrator | +------+--------+----------+
2025-10-09 11:13:47.305555 | orchestrator | + server_ping
2025-10-09 11:13:47.306105 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-10-09 11:13:47.306128 | orchestrator | ++ tr -d '\r'
2025-10-09 11:13:50.573854 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:13:50.573962 | orchestrator | + ping -c3 192.168.112.109 2025-10-09 11:13:50.588037 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 2025-10-09 11:13:50.588068 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=11.5 ms 2025-10-09 11:13:51.581284 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.41 ms 2025-10-09 11:13:52.583633 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=2.18 ms 2025-10-09 11:13:52.583735 | orchestrator | 2025-10-09 11:13:52.583751 | orchestrator | --- 192.168.112.109 ping statistics --- 2025-10-09 11:13:52.583764 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-10-09 11:13:52.583776 | orchestrator | rtt min/avg/max/mdev = 2.175/5.353/11.470/4.326 ms 2025-10-09 11:13:52.583787 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:13:52.583800 | orchestrator | + ping -c3 192.168.112.163 2025-10-09 11:13:52.593637 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data. 
2025-10-09 11:13:52.593664 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=6.88 ms 2025-10-09 11:13:53.590883 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.60 ms 2025-10-09 11:13:54.593121 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=2.30 ms 2025-10-09 11:13:54.593227 | orchestrator | 2025-10-09 11:13:54.593243 | orchestrator | --- 192.168.112.163 ping statistics --- 2025-10-09 11:13:54.593256 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-10-09 11:13:54.593268 | orchestrator | rtt min/avg/max/mdev = 2.301/3.927/6.880/2.091 ms 2025-10-09 11:13:54.593280 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:13:54.593292 | orchestrator | + ping -c3 192.168.112.182 2025-10-09 11:13:54.607556 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 2025-10-09 11:13:54.607644 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=10.2 ms 2025-10-09 11:13:55.601591 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.51 ms 2025-10-09 11:13:56.602938 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=2.01 ms 2025-10-09 11:13:56.603035 | orchestrator | 2025-10-09 11:13:56.603051 | orchestrator | --- 192.168.112.182 ping statistics --- 2025-10-09 11:13:56.603065 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-10-09 11:13:56.603076 | orchestrator | rtt min/avg/max/mdev = 2.008/4.892/10.163/3.732 ms 2025-10-09 11:13:56.603646 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:13:56.603670 | orchestrator | + ping -c3 192.168.112.160 2025-10-09 11:13:56.614559 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data. 
2025-10-09 11:13:56.614582 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=6.68 ms 2025-10-09 11:13:57.612495 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.31 ms 2025-10-09 11:13:58.614374 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=1.90 ms 2025-10-09 11:13:58.614462 | orchestrator | 2025-10-09 11:13:58.614476 | orchestrator | --- 192.168.112.160 ping statistics --- 2025-10-09 11:13:58.614489 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-10-09 11:13:58.614501 | orchestrator | rtt min/avg/max/mdev = 1.901/3.631/6.682/2.163 ms 2025-10-09 11:13:58.614513 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:13:58.614551 | orchestrator | + ping -c3 192.168.112.100 2025-10-09 11:13:58.628371 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 2025-10-09 11:13:58.628395 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=9.37 ms 2025-10-09 11:13:59.623392 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.63 ms 2025-10-09 11:14:00.624890 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.77 ms 2025-10-09 11:14:00.624983 | orchestrator | 2025-10-09 11:14:00.624998 | orchestrator | --- 192.168.112.100 ping statistics --- 2025-10-09 11:14:00.625011 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-10-09 11:14:00.625023 | orchestrator | rtt min/avg/max/mdev = 1.769/4.589/9.371/3.399 ms 2025-10-09 11:14:00.625283 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-10-09 11:14:03.946900 | orchestrator | 2025-10-09 11:14:03 | INFO  | Live migrating server b476cfa3-6ba7-408c-b421-541e2f64a37e 2025-10-09 11:14:13.724591 | orchestrator | 2025-10-09 11:14:13 | INFO  | Live migration of 
b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:14:16.034198 | orchestrator | 2025-10-09 11:14:16 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:14:18.406116 | orchestrator | 2025-10-09 11:14:18 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:14:20.776983 | orchestrator | 2025-10-09 11:14:20 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:14:23.071270 | orchestrator | 2025-10-09 11:14:23 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:14:25.329917 | orchestrator | 2025-10-09 11:14:25 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:14:27.596942 | orchestrator | 2025-10-09 11:14:27 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:14:29.893990 | orchestrator | 2025-10-09 11:14:29 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) is still in progress 2025-10-09 11:14:32.361546 | orchestrator | 2025-10-09 11:14:32 | INFO  | Live migration of b476cfa3-6ba7-408c-b421-541e2f64a37e (test-4) completed with status ACTIVE 2025-10-09 11:14:32.361655 | orchestrator | 2025-10-09 11:14:32 | INFO  | Live migrating server cf1d5a4d-7509-412b-99ba-30d9c0cbc51c 2025-10-09 11:14:43.058998 | orchestrator | 2025-10-09 11:14:43 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:14:45.401609 | orchestrator | 2025-10-09 11:14:45 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:14:47.722952 | orchestrator | 2025-10-09 11:14:47 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:14:50.096399 | orchestrator 
| 2025-10-09 11:14:50 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:14:52.391836 | orchestrator | 2025-10-09 11:14:52 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:14:54.740565 | orchestrator | 2025-10-09 11:14:54 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:14:57.018953 | orchestrator | 2025-10-09 11:14:57 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:14:59.307232 | orchestrator | 2025-10-09 11:14:59 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) is still in progress 2025-10-09 11:15:01.623963 | orchestrator | 2025-10-09 11:15:01 | INFO  | Live migration of cf1d5a4d-7509-412b-99ba-30d9c0cbc51c (test-3) completed with status ACTIVE 2025-10-09 11:15:01.624095 | orchestrator | 2025-10-09 11:15:01 | INFO  | Live migrating server f09ee639-aaf0-49dd-842f-d8c066411fb7 2025-10-09 11:15:11.223425 | orchestrator | 2025-10-09 11:15:11 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:15:13.538499 | orchestrator | 2025-10-09 11:15:13 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:15:15.866173 | orchestrator | 2025-10-09 11:15:15 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:15:18.167637 | orchestrator | 2025-10-09 11:15:18 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:15:20.572625 | orchestrator | 2025-10-09 11:15:20 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:15:22.847797 | orchestrator | 2025-10-09 11:15:22 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in 
progress 2025-10-09 11:15:25.126467 | orchestrator | 2025-10-09 11:15:25 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:15:27.409418 | orchestrator | 2025-10-09 11:15:27 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) is still in progress 2025-10-09 11:15:29.689786 | orchestrator | 2025-10-09 11:15:29 | INFO  | Live migration of f09ee639-aaf0-49dd-842f-d8c066411fb7 (test-2) completed with status ACTIVE 2025-10-09 11:15:29.691171 | orchestrator | 2025-10-09 11:15:29 | INFO  | Live migrating server c4a5f8ca-870c-4e38-a0b0-e315217fea45 2025-10-09 11:15:39.403028 | orchestrator | 2025-10-09 11:15:39 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:15:41.737810 | orchestrator | 2025-10-09 11:15:41 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:15:44.043209 | orchestrator | 2025-10-09 11:15:44 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:15:46.361425 | orchestrator | 2025-10-09 11:15:46 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:15:48.607342 | orchestrator | 2025-10-09 11:15:48 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:15:50.882547 | orchestrator | 2025-10-09 11:15:50 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:15:53.127700 | orchestrator | 2025-10-09 11:15:53 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:15:55.407686 | orchestrator | 2025-10-09 11:15:55 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:15:57.699835 | orchestrator | 2025-10-09 11:15:57 | INFO  | Live migration of 
c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) is still in progress 2025-10-09 11:16:00.044477 | orchestrator | 2025-10-09 11:16:00 | INFO  | Live migration of c4a5f8ca-870c-4e38-a0b0-e315217fea45 (test-1) completed with status ACTIVE 2025-10-09 11:16:00.044588 | orchestrator | 2025-10-09 11:16:00 | INFO  | Live migrating server dacfd5ec-a5e5-4bf1-8601-753b43c4d777 2025-10-09 11:16:10.292753 | orchestrator | 2025-10-09 11:16:10 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:16:12.673742 | orchestrator | 2025-10-09 11:16:12 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:16:15.034200 | orchestrator | 2025-10-09 11:16:15 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:16:17.330624 | orchestrator | 2025-10-09 11:16:17 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:16:19.616377 | orchestrator | 2025-10-09 11:16:19 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:16:21.901426 | orchestrator | 2025-10-09 11:16:21 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:16:24.215536 | orchestrator | 2025-10-09 11:16:24 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:16:26.504785 | orchestrator | 2025-10-09 11:16:26 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:16:28.790336 | orchestrator | 2025-10-09 11:16:28 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) is still in progress 2025-10-09 11:16:31.067973 | orchestrator | 2025-10-09 11:16:31 | INFO  | Live migration of dacfd5ec-a5e5-4bf1-8601-753b43c4d777 (test) completed with status ACTIVE 2025-10-09 11:16:31.426639 | orchestrator | + 
compute_list
2025-10-09 11:16:31.426739 | orchestrator | + osism manage compute list testbed-node-3
2025-10-09 11:16:34.241449 | orchestrator | +------+--------+----------+
2025-10-09 11:16:34.241550 | orchestrator | | ID   | Name   | Status   |
2025-10-09 11:16:34.241565 | orchestrator | |------+--------+----------|
2025-10-09 11:16:34.241577 | orchestrator | +------+--------+----------+
2025-10-09 11:16:34.591238 | orchestrator | + osism manage compute list testbed-node-4
2025-10-09 11:16:37.466324 | orchestrator | +------+--------+----------+
2025-10-09 11:16:37.466440 | orchestrator | | ID   | Name   | Status   |
2025-10-09 11:16:37.466457 | orchestrator | |------+--------+----------|
2025-10-09 11:16:37.466469 | orchestrator | +------+--------+----------+
2025-10-09 11:16:37.868657 | orchestrator | + osism manage compute list testbed-node-5
2025-10-09 11:16:41.121196 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:16:41.121299 | orchestrator | | ID                                   | Name   | Status   |
2025-10-09 11:16:41.121313 | orchestrator | |--------------------------------------+--------+----------|
2025-10-09 11:16:41.121325 | orchestrator | | b476cfa3-6ba7-408c-b421-541e2f64a37e | test-4 | ACTIVE   |
2025-10-09 11:16:41.121336 | orchestrator | | cf1d5a4d-7509-412b-99ba-30d9c0cbc51c | test-3 | ACTIVE   |
2025-10-09 11:16:41.121347 | orchestrator | | f09ee639-aaf0-49dd-842f-d8c066411fb7 | test-2 | ACTIVE   |
2025-10-09 11:16:41.121358 | orchestrator | | c4a5f8ca-870c-4e38-a0b0-e315217fea45 | test-1 | ACTIVE   |
2025-10-09 11:16:41.121369 | orchestrator | | dacfd5ec-a5e5-4bf1-8601-753b43c4d777 | test   | ACTIVE   |
2025-10-09 11:16:41.121380 | orchestrator | +--------------------------------------+--------+----------+
2025-10-09 11:16:41.475242 | orchestrator | + server_ping
2025-10-09 11:16:41.476376 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-10-09 11:16:41.477357 | orchestrator | ++ tr -d '\r'
2025-10-09 11:16:44.387528 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:16:44.387637 | orchestrator | + ping -c3 192.168.112.109 2025-10-09 11:16:44.395329 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 2025-10-09 11:16:44.395355 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=6.20 ms 2025-10-09 11:16:45.393007 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.18 ms 2025-10-09 11:16:46.395170 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.94 ms 2025-10-09 11:16:46.395305 | orchestrator | 2025-10-09 11:16:46.395321 | orchestrator | --- 192.168.112.109 ping statistics --- 2025-10-09 11:16:46.395333 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-10-09 11:16:46.395343 | orchestrator | rtt min/avg/max/mdev = 1.940/3.438/6.196/1.952 ms 2025-10-09 11:16:46.395426 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:16:46.395467 | orchestrator | + ping -c3 192.168.112.163 2025-10-09 11:16:46.407431 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data. 
2025-10-09 11:16:46.407489 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=6.44 ms 2025-10-09 11:16:47.405152 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.44 ms 2025-10-09 11:16:48.407012 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=1.94 ms 2025-10-09 11:16:48.407086 | orchestrator | 2025-10-09 11:16:48.407093 | orchestrator | --- 192.168.112.163 ping statistics --- 2025-10-09 11:16:48.407099 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-10-09 11:16:48.407105 | orchestrator | rtt min/avg/max/mdev = 1.938/3.605/6.435/2.011 ms 2025-10-09 11:16:48.407707 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:16:48.407717 | orchestrator | + ping -c3 192.168.112.182 2025-10-09 11:16:48.419269 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 2025-10-09 11:16:48.419280 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=6.73 ms 2025-10-09 11:16:49.417558 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.62 ms 2025-10-09 11:16:50.419446 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=2.71 ms 2025-10-09 11:16:50.419530 | orchestrator | 2025-10-09 11:16:50.419540 | orchestrator | --- 192.168.112.182 ping statistics --- 2025-10-09 11:16:50.419548 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-10-09 11:16:50.419555 | orchestrator | rtt min/avg/max/mdev = 2.623/4.019/6.729/1.916 ms 2025-10-09 11:16:50.420264 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-10-09 11:16:50.420280 | orchestrator | + ping -c3 192.168.112.160 2025-10-09 11:16:50.432082 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data. 
2025-10-09 11:16:50.432097 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=7.07 ms
2025-10-09 11:16:51.429108 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.56 ms
2025-10-09 11:16:52.430737 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=2.19 ms
2025-10-09 11:16:52.430831 | orchestrator |
2025-10-09 11:16:52.430846 | orchestrator | --- 192.168.112.160 ping statistics ---
2025-10-09 11:16:52.430860 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:16:52.430872 | orchestrator | rtt min/avg/max/mdev = 2.189/3.939/7.071/2.219 ms
2025-10-09 11:16:52.431326 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-10-09 11:16:52.431349 | orchestrator | + ping -c3 192.168.112.100
2025-10-09 11:16:52.445131 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2025-10-09 11:16:52.445156 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=8.45 ms
2025-10-09 11:16:53.442250 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=3.17 ms
2025-10-09 11:16:54.442513 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=2.10 ms
2025-10-09 11:16:54.442609 | orchestrator |
2025-10-09 11:16:54.442623 | orchestrator | --- 192.168.112.100 ping statistics ---
2025-10-09 11:16:54.442634 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-10-09 11:16:54.442645 | orchestrator | rtt min/avg/max/mdev = 2.101/4.573/8.449/2.775 ms
2025-10-09 11:16:54.892455 | orchestrator | ok: Runtime: 0:21:46.626487
2025-10-09 11:16:54.945894 |
2025-10-09 11:16:54.946016 | TASK [Run tempest]
2025-10-09 11:16:55.480278 | orchestrator | skipping: Conditional result was False
2025-10-09 11:16:55.497317 |
2025-10-09 11:16:55.497490 | TASK [Check prometheus alert status]
2025-10-09 11:16:56.032614 | orchestrator | skipping: Conditional result was False
2025-10-09 11:16:56.036782 |
2025-10-09 11:16:56.036945 | PLAY RECAP
2025-10-09 11:16:56.037077 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-10-09 11:16:56.037140 |
2025-10-09 11:16:56.258642 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-10-09 11:16:56.259949 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-10-09 11:16:56.994955 |
2025-10-09 11:16:56.995187 | PLAY [Post output play]
2025-10-09 11:16:57.010632 |
2025-10-09 11:16:57.010753 | LOOP [stage-output : Register sources]
2025-10-09 11:16:57.080602 |
2025-10-09 11:16:57.080964 | TASK [stage-output : Check sudo]
2025-10-09 11:16:57.901564 | orchestrator | sudo: a password is required
2025-10-09 11:16:58.122161 | orchestrator | ok: Runtime: 0:00:00.009311
2025-10-09 11:16:58.134707 |
2025-10-09 11:16:58.134886 | LOOP [stage-output : Set source and destination for files and folders]
2025-10-09 11:16:58.174823 |
2025-10-09 11:16:58.175091 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-10-09 11:16:58.236181 | orchestrator | ok
2025-10-09 11:16:58.241962 |
2025-10-09 11:16:58.242063 | LOOP [stage-output : Ensure target folders exist]
2025-10-09 11:16:58.662238 | orchestrator | ok: "docs"
2025-10-09 11:16:58.662677 |
2025-10-09 11:16:58.894901 | orchestrator | ok: "artifacts"
2025-10-09 11:16:59.134563 | orchestrator | ok: "logs"
2025-10-09 11:16:59.150189 |
2025-10-09 11:16:59.150335 | LOOP [stage-output : Copy files and folders to staging folder]
2025-10-09 11:16:59.172753 |
2025-10-09 11:16:59.172946 | TASK [stage-output : Make all log files readable]
2025-10-09 11:16:59.433082 | orchestrator | ok
2025-10-09 11:16:59.442129 |
2025-10-09 11:16:59.442258 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-10-09 11:16:59.476525 | orchestrator | skipping: Conditional result was False
2025-10-09 11:16:59.492699 |
2025-10-09 11:16:59.492830 | TASK [stage-output : Discover log files for compression]
2025-10-09 11:16:59.516750 | orchestrator | skipping: Conditional result was False
2025-10-09 11:16:59.531933 |
2025-10-09 11:16:59.532083 | LOOP [stage-output : Archive everything from logs]
2025-10-09 11:16:59.578749 |
2025-10-09 11:16:59.578938 | PLAY [Post cleanup play]
2025-10-09 11:16:59.587951 |
2025-10-09 11:16:59.588059 | TASK [Set cloud fact (Zuul deployment)]
2025-10-09 11:16:59.639897 | orchestrator | ok
2025-10-09 11:16:59.649288 |
2025-10-09 11:16:59.649413 | TASK [Set cloud fact (local deployment)]
2025-10-09 11:16:59.672458 | orchestrator | skipping: Conditional result was False
2025-10-09 11:16:59.684311 |
2025-10-09 11:16:59.684453 | TASK [Clean the cloud environment]
2025-10-09 11:17:00.734749 | orchestrator | 2025-10-09 11:17:00 - clean up servers
2025-10-09 11:17:01.437087 | orchestrator | 2025-10-09 11:17:01 - testbed-manager
2025-10-09 11:17:01.521464 | orchestrator | 2025-10-09 11:17:01 - testbed-node-3
2025-10-09 11:17:01.606906 | orchestrator | 2025-10-09 11:17:01 - testbed-node-0
2025-10-09 11:17:01.693932 | orchestrator | 2025-10-09 11:17:01 - testbed-node-2
2025-10-09 11:17:01.781927 | orchestrator | 2025-10-09 11:17:01 - testbed-node-5
2025-10-09 11:17:01.876741 | orchestrator | 2025-10-09 11:17:01 - testbed-node-1
2025-10-09 11:17:01.973847 | orchestrator | 2025-10-09 11:17:01 - testbed-node-4
2025-10-09 11:17:02.069163 | orchestrator | 2025-10-09 11:17:02 - clean up keypairs
2025-10-09 11:17:02.084886 | orchestrator | 2025-10-09 11:17:02 - testbed
2025-10-09 11:17:02.112655 | orchestrator | 2025-10-09 11:17:02 - wait for servers to be gone
2025-10-09 11:17:12.970552 | orchestrator | 2025-10-09 11:17:12 - clean up ports
2025-10-09 11:17:13.174873 | orchestrator | 2025-10-09 11:17:13 - 4c6549dd-0316-413c-8df6-216dd3dec3a7
2025-10-09 11:17:13.419486 | orchestrator | 2025-10-09 11:17:13 -
92cff657-ae28-425c-8487-feee0ee2feec 2025-10-09 11:17:13.677200 | orchestrator | 2025-10-09 11:17:13 - 93b44c7a-8304-4243-9145-192ac1cf1e0c 2025-10-09 11:17:14.095996 | orchestrator | 2025-10-09 11:17:14 - b2a29287-8407-442a-a1ba-82599ade77cb 2025-10-09 11:17:14.310497 | orchestrator | 2025-10-09 11:17:14 - b60a2301-d3c4-4a2c-86b9-78ef15cb7ddb 2025-10-09 11:17:14.536077 | orchestrator | 2025-10-09 11:17:14 - bec18f33-2d40-4454-a3f6-9574d05457b8 2025-10-09 11:17:14.757077 | orchestrator | 2025-10-09 11:17:14 - d3b6474c-b1cb-4f16-ad9b-cfe65a9b70ed 2025-10-09 11:17:14.963418 | orchestrator | 2025-10-09 11:17:14 - clean up volumes 2025-10-09 11:17:15.069266 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-2-node-base 2025-10-09 11:17:15.107876 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-3-node-base 2025-10-09 11:17:15.144092 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-manager-base 2025-10-09 11:17:15.186617 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-5-node-base 2025-10-09 11:17:15.226130 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-0-node-base 2025-10-09 11:17:15.263680 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-1-node-base 2025-10-09 11:17:15.308094 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-7-node-4 2025-10-09 11:17:15.348191 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-3-node-3 2025-10-09 11:17:15.391045 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-4-node-4 2025-10-09 11:17:15.435007 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-4-node-base 2025-10-09 11:17:15.477771 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-5-node-5 2025-10-09 11:17:15.517237 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-0-node-3 2025-10-09 11:17:15.557652 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-1-node-4 2025-10-09 11:17:15.597158 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-2-node-5 2025-10-09 11:17:15.638013 | orchestrator | 2025-10-09 11:17:15 - 
testbed-volume-6-node-3 2025-10-09 11:17:15.679617 | orchestrator | 2025-10-09 11:17:15 - testbed-volume-8-node-5 2025-10-09 11:17:15.719495 | orchestrator | 2025-10-09 11:17:15 - disconnect routers 2025-10-09 11:17:15.841787 | orchestrator | 2025-10-09 11:17:15 - testbed 2025-10-09 11:17:16.804364 | orchestrator | 2025-10-09 11:17:16 - clean up subnets 2025-10-09 11:17:16.856894 | orchestrator | 2025-10-09 11:17:16 - subnet-testbed-management 2025-10-09 11:17:17.005651 | orchestrator | 2025-10-09 11:17:17 - clean up networks 2025-10-09 11:17:17.186481 | orchestrator | 2025-10-09 11:17:17 - net-testbed-management 2025-10-09 11:17:17.460292 | orchestrator | 2025-10-09 11:17:17 - clean up security groups 2025-10-09 11:17:17.508444 | orchestrator | 2025-10-09 11:17:17 - testbed-management 2025-10-09 11:17:17.635085 | orchestrator | 2025-10-09 11:17:17 - testbed-node 2025-10-09 11:17:17.735914 | orchestrator | 2025-10-09 11:17:17 - clean up floating ips 2025-10-09 11:17:17.771963 | orchestrator | 2025-10-09 11:17:17 - 81.163.193.25 2025-10-09 11:17:18.230228 | orchestrator | 2025-10-09 11:17:18 - clean up routers 2025-10-09 11:17:18.334196 | orchestrator | 2025-10-09 11:17:18 - testbed 2025-10-09 11:17:19.248273 | orchestrator | ok: Runtime: 0:00:19.176362 2025-10-09 11:17:19.251679 | 2025-10-09 11:17:19.251820 | PLAY RECAP 2025-10-09 11:17:19.251933 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-10-09 11:17:19.251990 | 2025-10-09 11:17:19.376383 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-10-09 11:17:19.377435 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-10-09 11:17:20.079348 | 2025-10-09 11:17:20.079522 | PLAY [Cleanup play] 2025-10-09 11:17:20.095013 | 2025-10-09 11:17:20.095127 | TASK [Set cloud fact (Zuul deployment)] 2025-10-09 11:17:20.151791 | orchestrator | ok 2025-10-09 11:17:20.161173 | 2025-10-09 11:17:20.161311 
| TASK [Set cloud fact (local deployment)] 2025-10-09 11:17:20.195643 | orchestrator | skipping: Conditional result was False 2025-10-09 11:17:20.210177 | 2025-10-09 11:17:20.210317 | TASK [Clean the cloud environment] 2025-10-09 11:17:21.299312 | orchestrator | 2025-10-09 11:17:21 - clean up servers 2025-10-09 11:17:21.768270 | orchestrator | 2025-10-09 11:17:21 - clean up keypairs 2025-10-09 11:17:21.784686 | orchestrator | 2025-10-09 11:17:21 - wait for servers to be gone 2025-10-09 11:17:21.829119 | orchestrator | 2025-10-09 11:17:21 - clean up ports 2025-10-09 11:17:22.412567 | orchestrator | 2025-10-09 11:17:22 - clean up volumes 2025-10-09 11:17:22.478559 | orchestrator | 2025-10-09 11:17:22 - disconnect routers 2025-10-09 11:17:22.507330 | orchestrator | 2025-10-09 11:17:22 - clean up subnets 2025-10-09 11:17:22.530815 | orchestrator | 2025-10-09 11:17:22 - clean up networks 2025-10-09 11:17:22.671842 | orchestrator | 2025-10-09 11:17:22 - clean up security groups 2025-10-09 11:17:22.708929 | orchestrator | 2025-10-09 11:17:22 - clean up floating ips 2025-10-09 11:17:22.733751 | orchestrator | 2025-10-09 11:17:22 - clean up routers 2025-10-09 11:17:23.249619 | orchestrator | ok: Runtime: 0:00:01.814537 2025-10-09 11:17:23.254437 | 2025-10-09 11:17:23.254597 | PLAY RECAP 2025-10-09 11:17:23.254719 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-10-09 11:17:23.254781 | 2025-10-09 11:17:23.372670 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-10-09 11:17:23.373659 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-10-09 11:17:24.099092 | 2025-10-09 11:17:24.099254 | PLAY [Base post-fetch] 2025-10-09 11:17:24.114310 | 2025-10-09 11:17:24.114460 | TASK [fetch-output : Set log path for multiple nodes] 2025-10-09 11:17:24.169178 | orchestrator | skipping: Conditional result was False 2025-10-09 11:17:24.175767 | 
2025-10-09 11:17:24.175900 | TASK [fetch-output : Set log path for single node] 2025-10-09 11:17:24.206543 | orchestrator | ok 2025-10-09 11:17:24.212383 | 2025-10-09 11:17:24.212486 | LOOP [fetch-output : Ensure local output dirs] 2025-10-09 11:17:24.668779 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/8acddd485d59423196f19f4b453180c3/work/logs" 2025-10-09 11:17:24.953341 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8acddd485d59423196f19f4b453180c3/work/artifacts" 2025-10-09 11:17:25.217574 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8acddd485d59423196f19f4b453180c3/work/docs" 2025-10-09 11:17:25.245315 | 2025-10-09 11:17:25.245491 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-10-09 11:17:26.128346 | orchestrator | changed: .d..t...... ./ 2025-10-09 11:17:26.128621 | orchestrator | changed: All items complete 2025-10-09 11:17:26.128659 | 2025-10-09 11:17:26.835303 | orchestrator | changed: .d..t...... ./ 2025-10-09 11:17:27.532726 | orchestrator | changed: .d..t...... 
./ 2025-10-09 11:17:27.552564 | 2025-10-09 11:17:27.552686 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-10-09 11:17:27.588479 | orchestrator | skipping: Conditional result was False 2025-10-09 11:17:27.591219 | orchestrator | skipping: Conditional result was False 2025-10-09 11:17:27.609450 | 2025-10-09 11:17:27.609550 | PLAY RECAP 2025-10-09 11:17:27.609622 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-10-09 11:17:27.609656 | 2025-10-09 11:17:27.727158 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-10-09 11:17:27.728152 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-10-09 11:17:28.439702 | 2025-10-09 11:17:28.440409 | PLAY [Base post] 2025-10-09 11:17:28.454642 | 2025-10-09 11:17:28.454764 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-10-09 11:17:29.369596 | orchestrator | changed 2025-10-09 11:17:29.378463 | 2025-10-09 11:17:29.378582 | PLAY RECAP 2025-10-09 11:17:29.378654 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-10-09 11:17:29.378730 | 2025-10-09 11:17:29.489925 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-10-09 11:17:29.490890 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-10-09 11:17:30.250998 | 2025-10-09 11:17:30.251160 | PLAY [Base post-logs] 2025-10-09 11:17:30.261344 | 2025-10-09 11:17:30.261514 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-10-09 11:17:30.702523 | localhost | changed 2025-10-09 11:17:30.712500 | 2025-10-09 11:17:30.712638 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-10-09 11:17:30.748949 | localhost | ok 2025-10-09 11:17:30.753493 | 2025-10-09 11:17:30.753629 | TASK [Set zuul-log-path fact] 2025-10-09 
11:17:30.770211 | localhost | ok 2025-10-09 11:17:30.780782 | 2025-10-09 11:17:30.780901 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-10-09 11:17:30.807028 | localhost | ok 2025-10-09 11:17:30.812646 | 2025-10-09 11:17:30.812798 | TASK [upload-logs : Create log directories] 2025-10-09 11:17:31.310059 | localhost | changed 2025-10-09 11:17:31.314233 | 2025-10-09 11:17:31.314395 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-10-09 11:17:31.793954 | localhost -> localhost | ok: Runtime: 0:00:00.006839 2025-10-09 11:17:31.798276 | 2025-10-09 11:17:31.798414 | TASK [upload-logs : Upload logs to log server] 2025-10-09 11:17:32.330286 | localhost | Output suppressed because no_log was given 2025-10-09 11:17:32.332184 | 2025-10-09 11:17:32.332287 | LOOP [upload-logs : Compress console log and json output] 2025-10-09 11:17:32.378129 | localhost | skipping: Conditional result was False 2025-10-09 11:17:32.384913 | localhost | skipping: Conditional result was False 2025-10-09 11:17:32.391979 | 2025-10-09 11:17:32.392214 | LOOP [upload-logs : Upload compressed console log and json output] 2025-10-09 11:17:32.438046 | localhost | skipping: Conditional result was False 2025-10-09 11:17:32.438662 | 2025-10-09 11:17:32.442029 | localhost | skipping: Conditional result was False 2025-10-09 11:17:32.455808 | 2025-10-09 11:17:32.456060 | LOOP [upload-logs : Upload console log and json output]